AutoGPTQ QLoRA finetune sample data and scripts.
```
git lfs install
git clone https://huggingface.co/dahara1/weblab-10b-instruction-sft-GPTQ
cd weblab-10b-instruction-sft-GPTQ/finetune_sample
python3 -m venv gptq_finetune
source gptq_finetune/bin/activate
pip install transformers==4.34.1
pip install datasets
pip install peft==0.5.0
pip install trl
pip install auto-gptq
pip install optimum
pip install torch==2.0.1
# finetune qlora
python3 finetune.py
# use qlora sample
python3 lora_test.py
```
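Since the exact versions matter so much here, it can help to check the installed packages against the pins before running the scripts. A minimal, hypothetical checker (not part of this repo) using only the standard library:

```python
from importlib.metadata import version, PackageNotFoundError

# Versions this README was tested with (see the list below).
EXPECTED = {
    "transformers": "4.34.1",
    "peft": "0.5.0",
    "auto-gptq": "0.4.2",
    "trl": "0.7.2",
    "optimum": "1.13.2",
    "torch": "2.0.1",
}

def check_versions(expected):
    """Return (package, installed, expected) tuples for every mismatch.

    installed is None when the package is not installed at all.
    """
    mismatches = []
    for pkg, want in expected.items():
        try:
            got = version(pkg)
        except PackageNotFoundError:
            got = None
        if got != want:
            mismatches.append((pkg, got, want))
    return mismatches

if __name__ == "__main__":
    for pkg, got, want in check_versions(EXPECTED):
        print(f"{pkg}: installed {got}, expected {want}")
```

An empty report means your environment matches the versions used here.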
Versions are very, very important.
For example, if you see an error like
```
ValueError: Target module QuantLinear() is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.
```
it means your peft version is too old.
I don't know whether these exact versions are required, but these are the versions in my working environment:
* auto-gptq 0.4.2
* trl 0.7.2
* optimum 1.13.2
* datasets 2.14.6
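The training setup itself lives in finetune.py. As a rough, hedged sketch of what QLoRA on a GPTQ model looks like with these versions (the LoRA hyperparameters here are illustrative assumptions, not the script's actual values; peft 0.5.0 is what adds support for GPTQ `QuantLinear` target modules, which is why the `ValueError` above appears on older versions):

```python
# Hedged sketch only -- see finetune.py for the real script.
# Assumes transformers 4.34.x with the optimum/auto-gptq integration
# and peft 0.5.0 installed as above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "dahara1/weblab-10b-instruction-sft-GPTQ"  # the repo cloned above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # let accelerate place the quantized weights
    torch_dtype=torch.float16,
)

# Prepare the quantized model for training and attach LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,                        # illustrative values, not the script's defaults
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```

From here the PEFT model can be handed to a trainer (the repo installs trl for this); the finetuned adapter is what ends up in a checkpoint directory like checkpoint-700 below.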
The official documentation says to install from source, but that sometimes causes errors.
If you can't get it to work, it may be better to wait until a stable version is released.
Good luck!
If you encounter ```RuntimeError: Unrecognized tensor type ID: AutocastCUDA```, check your torch version.
auto-gptq 0.4.2 with torch 2.1.0 did not work for me.
- finetune.py: GPTQ QLoRA finetune sample script.
- jawiki3.csv: sample data (Japanese).
- lora_test.py: script for using the LoRA adapter after finetuning.
- checkpoint-700: sample LoRA created for testing.

The model.safetensors in checkpoint-700 is the same file as ../gptq_model-4bit-128g.safetensors.
I couldn't find how to change the script's default model name, so I copied the file.
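Loading the finetuned adapter follows the usual PEFT pattern; a minimal hedged sketch of what lora_test.py presumably does (the prompt and generation settings here are assumptions):

```python
# Hedged sketch -- see lora_test.py for the actual test script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "dahara1/weblab-10b-instruction-sft-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", torch_dtype=torch.float16
)

# Attach the sample LoRA produced by finetune.py.
model = PeftModel.from_pretrained(model, "checkpoint-700")

prompt = "..."  # your instruction here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```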