Phoenixsymbol committed
Commit a06aa1b · 1 parent: 986ee59

Upload model

Files changed (1): README.md (+24, -0)
README.md CHANGED

@@ -37,6 +37,28 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: bfloat16
 
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
 - load_in_4bit: True
@@ -49,6 +71,8 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_compute_dtype: bfloat16
 ### Framework versions
 
+- PEFT 0.5.0.dev0
+- PEFT 0.5.0.dev0
 - PEFT 0.5.0.dev0
 - PEFT 0.5.0.dev0
 - PEFT 0.5.0.dev0
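The keys listed in the diff correspond to the constructor arguments of `transformers.BitsAndBytesConfig`. A minimal sketch of the same settings, written as a plain dict so it stands alone (in real use these would be passed as keyword arguments, with `bnb_4bit_compute_dtype=torch.bfloat16` rather than a string):

```python
# Quantization settings from the README diff above, as the keyword
# arguments one would pass to transformers.BitsAndBytesConfig.
# Names mirror the keys in the diff; this is an illustrative sketch,
# not the repository's own loading code.
quantization_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,                      # 4-bit NF4 quantization
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,         # nested quantization
    "bnb_4bit_compute_dtype": "bfloat16",      # torch.bfloat16 in practice
}

# 8-bit and 4-bit loading are mutually exclusive; this config uses 4-bit.
assert not (quantization_kwargs["load_in_8bit"]
            and quantization_kwargs["load_in_4bit"])
```

With `transformers` and `bitsandbytes` installed, the dict would be splatted into the config, e.g. `BitsAndBytesConfig(**{**quantization_kwargs, "bnb_4bit_compute_dtype": torch.bfloat16})`, and passed to `from_pretrained(..., quantization_config=...)`.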