IliyanGochev committed
Commit 212dbcc
1 parent: 3b05fa1

End of training

Files changed (3):
  1. README.md +26 -0
  2. adapter_model.bin +1 -1
  3. all_results.json +1 -0
README.md CHANGED
@@ -378,6 +378,30 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_use_double_quant: False
  - bnb_4bit_compute_dtype: float32
 
+ The following `bitsandbytes` quantization config was used during training:
+ - quant_method: bitsandbytes
+ - load_in_8bit: True
+ - load_in_4bit: False
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: fp4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float32
+
+ The following `bitsandbytes` quantization config was used during training:
+ - quant_method: bitsandbytes
+ - load_in_8bit: True
+ - load_in_4bit: False
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: fp4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float32
+
  The following `bitsandbytes` quantization config was used during training:
  - quant_method: bitsandbytes
  - load_in_8bit: True
@@ -435,6 +459,8 @@ The following `bitsandbytes` quantization config was used during training:
  - PEFT 0.5.0
  - PEFT 0.5.0
  - PEFT 0.5.0
+ - PEFT 0.5.0
+ - PEFT 0.5.0
 
  - PEFT 0.5.0.dev0
  `bitsandbytes` quantization config was used during training:
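The README additions above are the bullet-style dump of the quantization config that the training script re-appends on each save (hence the duplicated blocks). As a purely illustrative sketch (this helper is not part of the repo), the bullet list can be turned back into a typed Python dict like so:

```python
# Illustrative helper (not part of this repo): parse the bullet-style
# bitsandbytes quantization config from the README into a dict.
def parse_bnb_config(lines):
    config = {}
    for line in lines:
        line = line.strip()
        if not line.startswith("- ") or ": " not in line:
            continue  # skip anything that is not a "- key: value" bullet
        key, raw = line[2:].split(": ", 1)
        # Coerce the string values the README uses into Python types.
        if raw in ("True", "False"):
            value = raw == "True"
        elif raw == "None":
            value = None
        else:
            try:
                value = float(raw) if "." in raw else raw
            except ValueError:
                value = raw  # e.g. version-like strings stay as text
        config[key] = value
    return config

# The block added at README lines 381-391 in the diff above.
readme_block = """\
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
"""

cfg = parse_bnb_config(readme_block.splitlines())
print(cfg["load_in_8bit"], cfg["llm_int8_threshold"])  # → True 6.0
```

The keys mirror the parameters of `transformers.BitsAndBytesConfig`; here the model was trained with plain 8-bit loading (`load_in_8bit: True`), so all `bnb_4bit_*` entries are just defaults.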
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:526734d178d7f1a76d2c062877ad0c52d62dc52317c86681650af4250c62f213
+oid sha256:3e5dd6174d8b5524ec2c4a590bbbae741941169a385eba7f6dc02e3eea74d898
 size 38697637
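What changed here is a Git LFS pointer file, not the binary itself: the diff swaps the SHA-256 object ID while the size stays at 38697637 bytes. A minimal stdlib-only sketch of reading such a pointer (the pointer text is copied from the new side of the diff; the helper name is ours):

```python
# Parse a Git LFS pointer file: each line is "<key> <value>".
def parse_lfs_pointer(text):
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)  # e.g. "sha256:<hex>"
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

# The new pointer contents for adapter_model.bin from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:3e5dd6174d8b5524ec2c4a590bbbae741941169a385eba7f6dc02e3eea74d898
size 38697637
"""

info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])  # → sha256 38697637
```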
all_results.json ADDED
@@ -0,0 +1 @@
+{"eval/wer": 19.44762340733959, "eval/normalized_wer": 13.298817711150607}
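The new `all_results.json` is a single JSON object with the final evaluation metrics. A short sketch of reading it (the JSON line is copied from the diff; we assume, as is conventional for Whisper-style evaluations, that the WER values are percentages):

```python
import json

# The single line added to all_results.json, copied from the diff above.
raw = '{"eval/wer": 19.44762340733959, "eval/normalized_wer": 13.298817711150607}'
results = json.loads(raw)

# Assumed to be percentages: ~19.45% raw WER, ~13.30% after text normalization.
wer = results["eval/wer"]
normalized_wer = results["eval/normalized_wer"]
print(f"WER: {wer:.2f}%  normalized WER: {normalized_wer:.2f}%")
```

The normalized WER is lower because text normalization (casing, punctuation, number formatting) removes mismatches that do not reflect real recognition errors.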