SGEcon committed
Commit 455e8c7
1 Parent(s): 459f367

Update README.md

Files changed (1)
  1. README.md +6 -0
README.md CHANGED
@@ -84,12 +84,18 @@ If you wish to use the original data, please contact the original author directly.
 
 ## Training Details
 
+ - We train our model with PEFT (parameter-efficient fine-tuning).
+ PEFT is a technique that tunes only a small subset of a model's parameters during fine-tuning rather than all of them.
+ Because only a few parameters are tuned while the rest stay fixed, the model is less likely to suffer from catastrophic forgetting, where previously learned tasks are forgotten as new ones are learned.
+ This significantly reduces computation and storage costs.
+
 - We use QLoRA to train the base model.
 Quantized Low-Rank Adapters (QLoRA) is an efficient technique that fine-tunes a 4-bit quantized pre-trained language model through low-rank adapters, making it possible to fine-tune a 65-billion-parameter model on a single 48 GB GPU while significantly reducing memory usage.
 To increase memory efficiency without sacrificing performance, the method combines NormalFloat 4-bit (NF4), a data type that is information-theoretically optimal for normally distributed weights; Double Quantization, which quantizes the quantization constants themselves to further reduce average memory usage; and Paged Optimizers, which manage memory spikes during mini-batch processing.
 
 - We also performed instruction tuning, using the data we collected together with the kyujinpy/KOR-OpenOrca-Platypus-v3 dataset from Hugging Face.
 Instruction tuning is supervised fine-tuning on pairs in which the instruction and any input data together form the model's input and the desired output is the target.
+ In other words, instruction tuning fine-tunes a pre-trained model for a specific task or set of tasks by teaching it to follow specific instructions or guidelines.
 
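To make the PEFT bullet above concrete, here is a minimal sketch of attaching LoRA adapters with the Hugging Face `peft` library. The base-model name, target modules, and hyperparameters are illustrative placeholders, not the settings used to train this model.

```python
# Minimal PEFT/LoRA sketch (illustrative only; model name and
# hyperparameters are placeholders, not this model's actual setup).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("base-model-name")  # placeholder

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (assumption)
)

model = get_peft_model(base, lora_config)
# Only the small adapter matrices are trainable; the base weights stay frozen,
# which is what keeps compute/storage low and limits catastrophic forgetting.
model.print_trainable_parameters()
```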
 
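For the QLoRA bullet, a hedged sketch of the usual recipe: load the base model in 4-bit NF4 with double quantization via `bitsandbytes`, prepare it for k-bit training, and select a paged optimizer through `TrainingArguments`. All names and values below are assumptions for illustration, not the actual training configuration.

```python
# QLoRA-style loading sketch (illustrative; not the actual configuration).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # NormalFloat 4-bit data type
    bnb_4bit_use_double_quant=True,     # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "base-model-name",                  # placeholder
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # stabilizes training on a k-bit quantized model

args = TrainingArguments(
    output_dir="qlora-output",
    optim="paged_adamw_8bit",           # paged optimizer absorbs memory spikes
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
)
# LoRA adapters (as in the previous sketch) are then attached on top of the
# quantized model and trained with these arguments.
```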
 
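And for the instruction-tuning bullet, a small sketch of how one example (instruction plus optional input, paired with the desired output) can be rendered into a single training prompt. The template and field names are assumptions; the exact format used with KOR-OpenOrca-Platypus-v3 may differ.

```python
# Instruction-tuning prompt sketch (template and field names are assumptions).
def build_prompt(example: dict) -> str:
    """Render one (instruction, input, output) record into a training string."""
    instruction = example["instruction"]
    context = example.get("input", "")
    output = example["output"]
    if context:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            f"### Response:\n{output}"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n{output}"


sample = {
    "instruction": "Summarize the following news article in one sentence.",
    "input": "The central bank raised its policy rate by 25 basis points ...",
    "output": "The central bank raised its policy rate by 0.25 percentage points.",
}
print(build_prompt(sample))
```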