etechoptimist committed
Commit ab903ef
1 Parent(s): 40493c8

Update README.md

Files changed (1):
1. README.md +30 -2
README.md CHANGED
@@ -6,10 +6,38 @@ pipeline_tag: text-classification
  language:
  - en
  ---
- ## Training procedure
- ### Framework versions
 
In this project ([notebook](https://github.com/etechoptimist/generative_ai/blob/master/peft_foundationmodels_adaptation/LightweightFineTuning.ipynb)), I used LoRA (Low-Rank Adaptation) to fine-tune DistilGPT2, a foundation model, for a sequence classification task on the SST-2 dataset from the GLUE benchmark. The following steps were taken to implement and adapt the model efficiently:

### 1.1 Model and Tokenizer Setup
I started by loading DistilGPT2, a compact variant of GPT-2, using the Hugging Face AutoModelForSequenceClassification class. The base model was configured for a binary classification task with two labels: positive and negative.

I also loaded the corresponding DistilGPT2 tokenizer, taking care to set up tokenization and padding correctly, since GPT-2 models do not have a padding token by default.
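A minimal sketch of this setup (the pad-token handling shown is the standard workaround for GPT-2-style tokenizers; the exact settings live in the notebook):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load DistilGPT2 with a two-label sequence classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilgpt2",
    num_labels=2,
    id2label={0: "negative", 1: "positive"},
    label2id={"negative": 0, "positive": 1},
)

# GPT-2 tokenizers ship without a padding token, so reuse EOS for padding
# and make the model aware of it.
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
```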
### 1.2 Dataset: SST-2 from GLUE Benchmark
The Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark was used for training and evaluation. SST-2 is a sentiment classification dataset of movie reviews, where each review is labeled as either positive (1) or negative (0).

Because the dataset is slightly imbalanced between positive and negative samples, additional steps were taken to mitigate the imbalance. In essence, I used the F2 score, which gives more weight to false negatives (a sketch follows the links below). The following articles were crucial for handling imbalanced classes:

- https://machinelearningmastery.com/types-of-classification-in-machine-learning/
- https://machinelearningmastery.com/tour-of-evaluation-metrics-for-imbalanced-classification/
- https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/
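For illustration, the dataset can be loaded with the datasets library and the F2 score computed with scikit-learn's fbeta_score. This continues from the setup snippet above; the max_length of 128 is an assumption, not necessarily the notebook's value:

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import fbeta_score

# SST-2: single sentences labeled 0 (negative) or 1 (positive).
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# F2 = F-beta with beta=2: recall counts twice as much as precision,
# so false negatives are penalized more heavily than false positives.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f2": fbeta_score(labels, preds, beta=2)}
```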
### 1.3 Applying LoRA for Parameter-Efficient Fine-Tuning
To fine-tune the model efficiently with minimal trainable parameters, I applied LoRA using the PEFT (Parameter-Efficient Fine-Tuning) library. LoRA was applied specifically to the attention layers of the base model, introducing low-rank adapters that allow the model to be fine-tuned without updating all of its parameters. This reduces the memory and computational requirements compared to traditional full fine-tuning.
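A sketch of that configuration with the PEFT library: in DistilGPT2 the fused attention projection lives in modules named c_attn, and the rank, alpha, and dropout values below are illustrative rather than the notebook's exact choices:

```python
from peft import LoraConfig, TaskType, get_peft_model

# Attach low-rank adapters to the fused QKV projection (c_attn) in each
# attention block; only the adapter matrices become trainable.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    target_modules=["c_attn"],
    r=8,               # rank of the low-rank update (illustrative)
    lora_alpha=16,     # scaling factor (illustrative)
    lora_dropout=0.1,  # dropout on adapter inputs (illustrative)
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # shows how few parameters train
```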
### 1.4 Training the LoRA-Adapted Model
I used Hugging Face's Trainer API to fine-tune the LoRA-enhanced DistilGPT2 model on the SST-2 dataset. The training loop was configured to evaluate the F2 score at each epoch, and training ran with GPU acceleration when available.
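A sketch of that loop, reusing compute_metrics from above; the output directory and hyperparameters are placeholders, not the notebook's exact values:

```python
from transformers import Trainer, TrainingArguments, DataCollatorWithPadding

training_args = TrainingArguments(
    output_dir="distilgpt2-lora-sst2",  # hypothetical path
    num_train_epochs=3,                 # illustrative
    per_device_train_batch_size=16,     # illustrative
    learning_rate=2e-4,                 # illustrative
    evaluation_strategy="epoch",        # compute the F2 score every epoch
    logging_strategy="epoch",
)

trainer = Trainer(
    model=peft_model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorWithPadding(tokenizer),  # dynamic padding per batch
    compute_metrics=compute_metrics,
)

trainer.train()  # the Trainer uses the GPU automatically when one is present
```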
### 1.5 Evaluation and Saving the Fine-Tuned Model
After training, I evaluated the model's performance on the validation set, focusing on the F2 score to measure how well the model handled false negatives. Finally, I saved the fine-tuned LoRA model using the PeftModel.save_pretrained() method, making it available for later inference or further fine-tuning.
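The final steps might look like this; the adapter directory name is a placeholder:

```python
# Evaluate on the SST-2 validation split; "eval_f2" comes from compute_metrics.
metrics = trainer.evaluate()
print(metrics["eval_f2"])

# Saving a PEFT model writes only the small LoRA adapter weights,
# not a full copy of the base model.
peft_model.save_pretrained("distilgpt2-lora-sst2-adapter")  # hypothetical path

# Later, the adapter can be reloaded on top of the base model:
# from peft import PeftModel
# model = PeftModel.from_pretrained(base_model, "distilgpt2-lora-sst2-adapter")
```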

- PEFT 0.5.0