IliyanGochev committed · a7cca66
Parent(s): 849bd72

Training in progress epoch 16
README.md CHANGED

@@ -306,6 +306,30 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: False
 - bnb_4bit_compute_dtype: float32
 
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
 The following `bitsandbytes` quantization config was used during training:
 - quant_method: bitsandbytes
 - load_in_8bit: True
@@ -357,6 +381,8 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.5.0
 - PEFT 0.5.0
 - PEFT 0.5.0
+- PEFT 0.5.0
+- PEFT 0.5.0
 
 - PEFT 0.5.0.dev0
 The following `bitsandbytes` quantization config was used during training:
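For orientation, the config dump appended to the README above describes an 8-bit `bitsandbytes` setup. A minimal sketch, assuming the `transformers` `BitsAndBytesConfig` API (an illustration of the listed settings, not this repository's actual training code):

```python
# Minimal sketch: the 8-bit quantization settings dumped in the README,
# expressed as a transformers BitsAndBytesConfig. Assumes transformers with
# bitsandbytes support is installed; not taken from this repository's code.
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                 # matches `load_in_8bit: True`
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",         # 4-bit settings are inert with 8-bit loading
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
# `quant_method: bitsandbytes` in the dump is implied by using this config class.
```

Such a config would typically be passed to a model via `from_pretrained(..., quantization_config=bnb_config)`.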
Whisper PEFT Fine-Tuning/events.out.tfevents.1696436338.MLbox.300106.0 CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:d979cf3bd3a58537259aa3827f126c4ad52cf6bb3914a1c1945d0f6a7f90f7f1
+size 7135
adapter_model.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:72ea279c8990d477e17243cc234c42746138eadec01c69d5f530abbb5285435b
 size 38697637
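Both binary files above are stored as Git LFS pointers: the repository tracks only a `version` line, a SHA-256 `oid`, and a byte `size`, while the blob itself lives on the LFS server. A minimal sketch (my addition, using a hypothetical local path) of checking a downloaded file against its pointer:

```python
# Minimal sketch: verify a downloaded LFS object against its pointer.
# The pointer's oid is the SHA-256 hex digest of the full file contents.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# oid from the adapter_model.bin pointer in this commit;
# "adapter_model.bin" is a hypothetical local download path.
expected_oid = "72ea279c8990d477e17243cc234c42746138eadec01c69d5f530abbb5285435b"
assert sha256_of("adapter_model.bin") == expected_oid
```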