LoRA finetune sample data and scripts.
```
pip install transformers
pip install datasets
pip install peft==0.5.0
pip install trl
pip install auto-gptq
pip install optimum
```
The package versions are very important.
For example, if you get an error like
```
ValueError: Target module QuantLinear() is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.
```
it is because your peft version is too old.
I don't know whether these exact versions are required, but these are the versions in my working environment:
* auto-gptq 0.4.1+cu117
* trl 0.7.1
* optimum 1.12.1.dev0
* transformers 4.32.1
* datasets 2.14.4
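To confirm that your environment matches the versions above, a quick check can help. This is a minimal sketch using only the standard library; the loop lists the pip package names used in the install commands above:

```
# Print the installed version of each required package, or "not installed".
from importlib.metadata import version, PackageNotFoundError

def get_version(package: str) -> str:
    """Return the installed version of a pip package, or 'not installed'."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"

for pkg in ["transformers", "datasets", "peft", "trl", "auto-gptq", "optimum"]:
    print(f"{pkg}: {get_version(pkg)}")
```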
The documentation says to install from source, but sometimes that causes errors.
If you can't get it to work, it may be better to wait until a stable version is released.
Good luck!
- finetune.py: GPTQ finetune sample script.
- jawiki3.csv: sample data (Japanese).
- lora_test.py: after finetuning, you can load the LoRA adapter with this script.
The model.safetensors file is the same file as ../gptq_model-4bit-128g.safetensors.
I couldn't find how to change the script's default model name, so I copied it.
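Instead of copying, a symlink avoids keeping a duplicate of the large model file on disk (a sketch assuming your OS supports symlinks and the relative path above is correct):

```
# Make model.safetensors point at the quantized model instead of copying it.
ln -sf ../gptq_model-4bit-128g.safetensors model.safetensors
ls -l model.safetensors
```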