Text Generation · PEFT · English

egon-nlpulse committed · Commit 2436d09 · 1 Parent(s): 4d15db2

Files changed (1): README.md (+108, -1)
---
library_name: peft
license: apache-2.0
language:
- en
datasets:
- Abirate/english_quotes
---
# 4-bit quantization - 5.02 GB GPU memory usage for inference

```
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   1  NVIDIA GeForce ...  Off  | 00000000:04:00.0 Off |                  N/A |
| 65%   74C    P2   169W / 170W |   5028MiB / 12288MiB |     97%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
```
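
For comparison with the `nvidia-smi` figure above, the peak allocation can also be read from PyTorch directly. This is a minimal sketch, not part of the published scripts; the base model identifier is an assumption (the inference code below resolves it from the adapter via `config.base_model_name_or_path`), and `nvidia-smi` reports somewhat more than the allocator peak because it also counts the CUDA context.

```python
# Sketch (not from the original scripts): measure peak GPU memory after
# loading the base model in 4 bits with the same BitsAndBytes settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

torch.cuda.reset_peak_memory_stats()
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # assumed base model name
    quantization_config=quant_config,
    device_map={"": 0},
)
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```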

## Fine-tuning
```
3 epochs, all dataset samples (split=train), 939 steps
1 x GPU NVIDIA RTX 3060 12GB - max. GPU memory: 6.85 GB
Duration: 1h54min

$ nvidia-smi && free -h
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   1  NVIDIA GeForce ...  Off  | 00000000:04:00.0 Off |                  N/A |
|100%   87C    P2   168W / 170W |   6854MiB / 12288MiB |     98%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
               total        used        free      shared  buff/cache   available
Mem:            77Gi        13Gi       1.1Gi       116Mi        63Gi        63Gi
Swap:           37Gi       3.8Gi        34Gi
```
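
The exact training code lives in the repository linked under Scripts below. As a rough orientation, here is a condensed QLoRA sketch consistent with the run described above (4-bit NF4 base, LoRA adapter, 3 epochs over the `train` split of `Abirate/english_quotes`); the base model name, LoRA rank/alpha/dropout, batch size, and learning rate are illustrative assumptions, not the values actually used.

```python
# Condensed QLoRA fine-tuning sketch (illustrative; see the linked Scripts
# repository for the real code). Hyperparameters below are assumptions.
import torch
import transformers
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model name

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Same 4-bit NF4 config as listed in this model card.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=quant_config, device_map={"": 0}
)
model = prepare_model_for_kbit_training(model)

# Attach a LoRA adapter; rank/alpha/dropout here are illustrative values.
lora_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tokenize the quotes dataset (split=train, as stated above).
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda s: tokenizer(s["quote"]), batched=True)

trainer = transformers.Trainer(
    model=model,
    train_dataset=data["train"],
    args=transformers.TrainingArguments(
        num_train_epochs=3,
        per_device_train_batch_size=8,  # assumption
        learning_rate=2e-4,             # assumption
        bf16=True,
        output_dir="outputs",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("outputs/adapter")
```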

## Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

model_path = "nlpulse/llama2-7b-chat-english_quotes"

# tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token

# quantization config
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# load the base model in 4 bits and apply the PEFT LoRA adapter
config = PeftConfig.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=quant_config,
    device_map={"": 0},
    use_auth_token=True,
)
model = PeftModel.from_pretrained(model, model_path)

# inference
device = "cuda"
text_list = ["Ask not what your country", "Be the change that", "You only live once, but", "I'm selfish, impatient and"]
for text in text_list:
    inputs = tokenizer(text, return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=60)
    print(">>", text, "=>", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
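
For adapter-free deployment, PEFT's `merge_and_unload()` can fold the LoRA weights into the base model. Merging needs non-quantized base weights, so this sketch (not part of the original scripts) reloads the base model in fp16 first:

```python
# Optional: merge the LoRA adapter into the base model so it can be served
# without PEFT. Illustrative sketch; reloads the base in fp16 because
# merging is not done on 4-bit quantized weights.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftConfig, PeftModel

model_path = "nlpulse/llama2-7b-chat-english_quotes"
config = PeftConfig.from_pretrained(model_path)

base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, model_path).merge_and_unload()
merged.save_pretrained("llama2-7b-chat-english_quotes-merged")
```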

## Requirements
```bash
pip install -U bitsandbytes
pip install -U git+https://github.com/huggingface/transformers.git
pip install -U git+https://github.com/huggingface/peft.git
pip install -U accelerate
pip install -U datasets
pip install -U scipy
```

## Scripts
[https://github.com/nlpulse-io/sample_codes/tree/main/fine-tuning/peft_quantization_4bits/llama2-7b-chat](https://github.com/nlpulse-io/sample_codes/tree/main/fine-tuning/peft_quantization_4bits/llama2-7b-chat)

## References
[QLoRa: Fine-Tune a Large Language Model on Your GPU](https://towardsdatascience.com/qlora-fine-tune-a-large-language-model-on-your-gpu-27bed5a03e2b)

[Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes)

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.4.0.dev0