egon-nlpulse committed

Commit 17ff838 • 1 Parent(s): 714fa24

ajustes (adjustments)
README.md CHANGED
````diff
@@ -25,7 +25,6 @@ $ nvidia-smi
 ```
 
 ## Fine-tuning
-Details:
 ```
 3 epochs, all dataset samples (split=train), 939 steps
 1 x GPU NVidia RTX 3060 12GB - max. GPU memory: 7.44 GB
@@ -82,7 +81,7 @@ for text in text_list:
 
 ```
 
-
+## Requirements
 ```
 pip install -q -U bitsandbytes
 pip install -q -U git+https://github.com/huggingface/transformers.git
@@ -96,5 +95,5 @@ pip install -q -U scipy
 [https://github.com/nlpulse-io/sample_codes/tree/main/fine-tuning/peft_quantization_4bits/gptj-6b](https://github.com/nlpulse-io/sample_codes/tree/main/fine-tuning/peft_quantization_4bits/gptj-6b)
 
 
-
+## References
 [https://towardsdatascience.com/qlora-fine-tune-a-large-language-model-on-your-gpu-27bed5a03e2b](https://towardsdatascience.com/qlora-fine-tune-a-large-language-model-on-your-gpu-27bed5a03e2b)
````
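For context on what the new "## Requirements" and "## References" sections support: the linked sample_codes path (peft_quantization_4bits/gptj-6b) and the QLoRA article describe 4-bit PEFT fine-tuning of GPT-J-6B on a single consumer GPU. The following is a minimal sketch of that setup using the packages installed above (bitsandbytes, transformers, peft); the model id, quantization options, and LoRA hyperparameters are illustrative assumptions, not values taken from the repository.

```python
# Sketch: load GPT-J-6B quantized to 4 bits and attach LoRA adapters for PEFT
# fine-tuning (QLoRA-style). Hyperparameters below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "EleutherAI/gpt-j-6b"  # assumed Hub id for GPT-J-6B

# 4-bit NF4 quantization keeps the 6B model within a 12 GB GPU's memory budget
# (the README reports ~7.44 GB peak on an RTX 3060 12GB).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized base model for k-bit training, then add LoRA adapters
# so only a small set of adapter weights is trained.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                # illustrative rank, not from the repo
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```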