ZeroUniqueness committed
Commit
26a5448
1 Parent(s): e0e6b68

Started the qlora repo

Files changed (2)
  1. README.md +12 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -15,6 +15,17 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_use_double_quant: True
  - bnb_4bit_compute_dtype: bfloat16
 
+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: True
+ - bnb_4bit_compute_dtype: bfloat16
+
  The following `bitsandbytes` quantization config was used during training:
  - load_in_8bit: False
  - load_in_4bit: True
@@ -27,6 +38,7 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_compute_dtype: bfloat16
  ### Framework versions
 
+ - PEFT 0.5.0.dev0
  - PEFT 0.5.0.dev0
 
  - PEFT 0.5.0.dev0
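For reference, the quantization settings listed in the README map directly onto `BitsAndBytesConfig` from `transformers`. The sketch below is illustrative only: the base model id is a placeholder, since this commit does not name the base model.

```python
# Minimal sketch of the quantization config recorded above, using
# transformers' BitsAndBytesConfig. The base model id is a placeholder,
# not something stated in this repository.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True (load_in_8bit: False)
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model id
    quantization_config=bnb_config,
    device_map="auto",
)
```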
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cc708d285223390333d59171f97fd96d69f78c2d5c0ee0144b6339d6184a4216
+ oid sha256:2cbe47759fc9c930a32e5a335669ed9e3d5576d6ab7b8ca02e45def36b4ced96
  size 500897101
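The updated `adapter_model.bin` is a PEFT adapter (the README's Framework versions list PEFT 0.5.0.dev0). A minimal, illustrative way to attach it to the quantized base model is sketched below; the adapter repo id is a placeholder, as the commit page does not spell out this repository's id.

```python
# Minimal sketch: attaching the updated adapter_model.bin to the quantized
# base model with PEFT. The adapter repo id is a placeholder.
from peft import PeftModel

model = PeftModel.from_pretrained(
    base_model,                      # quantized base model from the sketch above
    "ZeroUniqueness/qlora-adapter",  # placeholder adapter repo id
)
model.eval()
```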