Update README.md
README.md (changed)
@@ -6,8 +6,12 @@ license: apache-2.0
 This is unsloth/llama-3-8b-Instruct trained on the Replete-AI/code-test-dataset using the code below with Unsloth and Google Colab, in under 15 GB of VRAM. The training completed in about 40 minutes total.
 
 Copied from my announcement in my Discord:
-```
-If anyone wants to train their own llama-3-8b model for free on any dataset
+```
+If anyone wants to train their own llama-3-8b model for free on any dataset
+that has around 1,500 lines of data or less, you can now do it easily by using
+the code I provided in the model card for my test model in this repo and
+Google Colab. The training for this model uses (Unsloth + QLoRA + GaLore) to
+make training possible under such low VRAM.
 ```
 
 For anyone who is new to coding and training AI, all you really have to edit is
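
The announcement above names the key recipe: Unsloth for fast 4-bit loading, QLoRA adapters for parameter-efficient fine-tuning, and the GaLore optimizer to shrink optimizer-state memory. Below is a minimal sketch of how those pieces typically wire together in a Colab notebook; the model and dataset IDs come from this repo, but every hyperparameter, the `optim` choice, and the trainer setup are assumptions for illustration, not the author's exact script (see the full model card for that).

```Python
# Sketch only: assumes `pip install unsloth galore-torch trl datasets`.
# Model/dataset IDs are from this repo; all hyperparameters are guesses.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Unsloth loads the base model 4-bit quantized (the QLoRA part).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Replete-AI/code-test-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes prompts were pre-formatted into a "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        # The GaLore part: low-rank gradient projection cuts optimizer memory.
        optim="galore_adamw_8bit",
        optim_target_modules=["attn", "mlp"],
    ),
)
trainer.train()
```

A setup along these lines is how the combination can stay under the roughly 15 GB of VRAM reported above, which fits on a free Colab GPU.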
@@ -85,6 +89,7 @@ model = FastLanguageModel.get_peft_model(
 ```
 
 ```Python
+
 alpaca_prompt = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
 
 Below is an instruction that describes a task. Write a response that appropriately completes the request.<|eot_id|><|start_header_id|>user<|end_header_id|>
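
The `alpaca_prompt` template in the second hunk is cut off by the diff at the user header. Purely as a hypothetical illustration of how such a template is normally completed and mapped over a dataset, here is a sketch; the template's continuation and the column names `instruction`/`output` are assumed from the standard Alpaca format, not taken from this repo.

```Python
# Hypothetical sketch: the continuation after the user header and the
# dataset column names ("instruction", "output") are assumptions.
alpaca_prompt = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Below is an instruction that describes a task. Write a response that appropriately completes the request.<|eot_id|><|start_header_id|>user<|end_header_id|>

{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{}<|eot_id|>"""

def formatting_prompts_func(examples):
    # Fill one prompt per row; the trainer then reads the "text" column.
    texts = [
        alpaca_prompt.format(instruction, output)
        for instruction, output in zip(examples["instruction"], examples["output"])
    ]
    return {"text": texts}

# dataset = dataset.map(formatting_prompts_func, batched=True)
```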