Liu Hong Yuan Tom committed
Commit 73eece8
1 Parent(s): 8c7abcb

Update README.md

Files changed (1): README.md (+35 -1)
README.md CHANGED
@@ -10,6 +10,8 @@ tags:
  - gemma2
  - trl
  - sft
+ datasets:
+ - yahma/alpaca-cleaned
  ---

  # Uploaded model
@@ -20,4 +22,36 @@ tags:

  This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ # Details
+
+ This model is fine-tuned from unsloth/gemma-2-9b-bnb-4bit on the alpaca-cleaned dataset using the **QLoRA** method.
+
+ It reached a training loss of 0.9238 on the alpaca-cleaned dataset after step 120.
+
+ The model follows the Alpaca prompt format:
+
+ ```
+ Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}
+ ```
+
+ ## Training
+
+ The model was trained on a single Tesla T4 GPU.
+
+ - 1254.1115 seconds (20.9 minutes) used for training.
+ - Peak reserved memory = 9.383 GB.
+ - Peak reserved memory for training = 2.807 GB.
+ - Peak reserved memory % of max memory = 63.622 %.
+ - Peak reserved memory for training % of max memory = 19.033 %.
+
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
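The Alpaca template added in this commit is filled per example at training or inference time. A minimal sketch of that formatting step — the `alpaca_prompt` variable and the `format_example` helper are illustrative names, not part of the model card:

```python
# Alpaca-style template from the README above; the three {} slots take the
# instruction, the optional input, and the response (left empty at inference).
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

def format_example(instruction: str, inp: str = "", response: str = "") -> str:
    """Fill the template; leave `response` empty when prompting for generation."""
    return alpaca_prompt.format(instruction, inp, response)

prompt = format_example("Summarize the text.", "Unsloth trains LLMs faster.")
```

During supervised fine-tuning an end-of-sequence token is typically appended to each formatted example so the model learns to stop generating; the exact token comes from the tokenizer.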
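The training figures in the card are mutually consistent; a quick sketch that backs out the total GPU memory they imply (about 15 GB, in line with a Tesla T4) and re-derives the reported percentages:

```python
# Figures reported in the Training section of the card.
seconds = 1254.1115           # wall-clock training time
peak_reserved_gb = 9.383      # peak reserved memory
peak_training_gb = 2.807      # peak reserved memory for training
peak_reserved_pct = 63.622    # peak reserved memory as % of max memory

minutes = seconds / 60.0  # ~20.9 minutes, as reported

# Back out the total GPU memory the percentage implies.
max_memory_gb = peak_reserved_gb / (peak_reserved_pct / 100.0)

# Recompute the training-memory percentage from the absolute figures.
training_pct = peak_training_gb / max_memory_gb * 100.0

print(f"implied max memory: {max_memory_gb:.2f} GB")  # ~14.75 GB, a T4-class GPU
print(f"training share: {training_pct:.3f} %")        # ~19.033 %, matching the card
```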