KallistiTMR committed
Commit ea87b5d
1 Parent(s): 5af8167

Upload model

Files changed (2)
  1. README.md +144 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -4,6 +4,138 @@ library_name: peft
 ## Training procedure


+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float16
+
(… this identical 11-line block is added 12 times in total in this hunk; only one copy is shown …)
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
  - load_in_4bit: True
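
For reference, the values recorded in the (repeated) README block map onto `transformers.BitsAndBytesConfig` as follows. This is a minimal sketch reconstructed from the listed fields, not code taken from this repository:

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config recorded in the README:
# 4-bit NF4 weights, float16 compute, double quantization disabled,
# int8-specific options left at the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```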
 
@@ -27,6 +159,18 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_compute_dtype: float16
 ### Framework versions

+ - PEFT 0.4.0
(… this line is added 12 times in total in this hunk; only one copy is shown …)
 - PEFT 0.4.0

  - PEFT 0.4.0
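
The twelve repeated config blocks and `- PEFT 0.4.0` lines are consistent with PEFT 0.4.0 appending its model-card section on every save rather than replacing it. Loading the uploaded adapter would look roughly like the sketch below; the base model and repository id are not named in this commit, so both identifiers are placeholders, and a causal-LM base is assumed:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# "base-model-id" and "KallistiTMR/adapter-repo" are placeholders:
# neither identifier appears in this commit.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,  # the 4-bit config sketched above
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "KallistiTMR/adapter-repo")
model.eval()
```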
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:f6439bedb6fe0b05881342827d8bc1549cb3014c44a0a2cdf1815c1ca89e7adb
+ oid sha256:039afdfaf65619c4e52d8e4db5420a19c262b57e99a7ea266ed9f6426db2f457
  size 134263757
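
Only the LFS pointer changes here: the sha256 oid is updated while the size stays at 134263757 bytes, i.e. retrained weights of the same total size. A downloaded file can be checked against the new pointer with a short script (a sketch; it assumes `adapter_model.bin` sits in the current directory):

```python
import hashlib
from pathlib import Path

# Expected values from the updated LFS pointer above.
EXPECTED_OID = "039afdfaf65619c4e52d8e4db5420a19c262b57e99a7ea266ed9f6426db2f457"
EXPECTED_SIZE = 134263757

path = Path("adapter_model.bin")
assert path.stat().st_size == EXPECTED_SIZE, "size mismatch"

# Hash in 1 MiB chunks to avoid loading the whole file into memory.
h = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```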