KallistiTMR committed on
Commit
6e7b030
1 Parent(s): 7707c5c

Upload model

Files changed (2)
  1. README.md +168 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -4,6 +4,160 @@ library_name: peft
 ## Training procedure
 
 
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
(the added block above appears 14 times in the diff, filling new lines 7–160)
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
 - load_in_4bit: True
@@ -27,6 +181,20 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_compute_dtype: float16
 ### Framework versions
 
+- PEFT 0.4.0
(the added line above appears 14 times in the diff, at new lines 184–197)
 - PEFT 0.4.0
 
 - PEFT 0.4.0
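The settings listed in the README map directly onto the `BitsAndBytesConfig` class from `transformers`, which is how a `bitsandbytes` quantization config like this is normally constructed. A minimal sketch under that assumption (the config values are copied from the diff above; no base model from this repo is named, so none is loaded here):

```python
# Sketch: the bitsandbytes config from the README, expressed as a
# transformers BitsAndBytesConfig. Values mirror the diff above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    llm_int8_threshold=6.0,                # llm_int8_threshold: 6.0
    llm_int8_has_fp16_weight=False,        # llm_int8_has_fp16_weight: False
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)
```

Such a config would typically be passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained(...)` before attaching the PEFT adapter; the framework-versions section indicates PEFT 0.4.0 was used for training.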
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:89fef4a825e1da039b59e7f616d3ac58c4aab3d18bd9403b6a7d5668a0772666
+oid sha256:b68316159f48ff2ec0e4e69ecf0fe5c2d714c559eec4db822267ad0b809dfd5f
 size 134263757
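The `adapter_model.bin` entry in this diff is a Git LFS pointer file, not the weights themselves: the pointer records the SHA-256 and byte size of the real object, and only the `oid` changed in this commit. A small stdlib-only sketch of parsing such a pointer (the pointer text is copied from the new version above; the function name is illustrative):

```python
# Parse a Git LFS pointer file into its key/value fields.
# Pointer text copied verbatim from the adapter_model.bin diff above.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:b68316159f48ff2ec0e4e69ecf0fe5c2d714c559eec4db822267ad0b809dfd5f
size 134263757
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

ptr = parse_lfs_pointer(POINTER)
print(ptr["oid"])
print(ptr["size"])
```

After downloading the resolved file, `sha256sum adapter_model.bin` should match the hex digest after the `sha256:` prefix; a mismatch means the download is stale or corrupt.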