This is a LoRA trained on Llama 2 7B Chat.

The fine-tuning dataset consists of a small number of personally written conversations and a large amount of AI-generated dialogue based on them, in the Alpaca format. It comprises approximately 9,000 instructions in total and is 12.6 MB in size.
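
Each record follows the standard Alpaca instruction schema. A minimal sketch of one record; the field names are the usual Alpaca keys, and the values below are hypothetical, not taken from the actual dataset:

```python
# One hypothetical Alpaca-format record; the values are illustrative only.
example = {
    "instruction": "Reply to the user's message, staying in character.",
    "input": "How was your day?",
    "output": "Busy, but good. I finally finished the book you recommended!",
}
```
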
## Training

Fine-tuning was performed with the "Alpaca + Llama-3 8b Unsloth 2x faster finetuning.ipynb" notebook provided by UnslothAI, run on Google Colab with an L4 GPU. All training parameters were left at the notebook's defaults, except that max_steps = 60 was replaced with num_train_epochs = 1 in TrainingArguments.
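
The change amounts to swapping one keyword argument in the notebook's TrainingArguments. A minimal sketch of that change; every value other than num_train_epochs is an assumption based on the notebook's published defaults, not something stated in this README:

```python
import torch
from transformers import TrainingArguments

# Assumed notebook defaults, shown only for context around the one edit.
args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    # max_steps=60,              # the notebook's default, removed here
    num_train_epochs=1,          # train for exactly one epoch instead
    learning_rate=2e-4,
    fp16=not torch.cuda.is_bf16_supported(),
    bf16=torch.cuda.is_bf16_supported(),
    logging_steps=1,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="outputs",
)
```

Training by epoch rather than by step count means the run covers the full ~9,000-instruction dataset exactly once, instead of stopping after 60 optimizer steps.
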
## Using