KallistiTMR committed
Commit fbc3fb7
1 Parent(s): ff7d598

Upload model

Files changed (2)
  1. README.md +36 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -323,6 +323,39 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: False
 - bnb_4bit_compute_dtype: float16
 
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
 - load_in_4bit: True
@@ -364,5 +397,8 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.4.0
 - PEFT 0.4.0
 - PEFT 0.4.0
+- PEFT 0.4.0
+- PEFT 0.4.0
+- PEFT 0.4.0
 
 - PEFT 0.4.0
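For context, the block added above appears to be the config dump that `peft` appends to the model card on each save (hence the repeated copies). A minimal sketch of the same settings expressed as a `transformers.BitsAndBytesConfig`, with the base model and adapter repo ids left as placeholders since this commit does not name them:

# Sketch only: the quantization settings listed in the README diff,
# expressed as a transformers BitsAndBytesConfig. The model ids below
# are placeholders; this commit does not identify the base model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: base model is not named in this commit
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "KallistiTMR/adapter-repo-id")  # placeholder repo id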
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:462217aea42ffb9ba7b60d59024fe2a0e7bb603a4ed106535bd1a7091dce675d
+oid sha256:b5937500708dd1e16d7b8227c07f312cc82a5fd332af29eed3cb669edb73de8e
 size 134263757
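The adapter_model.bin change swaps only the Git LFS pointer's oid; the blob size is unchanged. Per the LFS pointer spec linked above, the oid is the SHA-256 of the actual file, so a downloaded copy can be checked with a short sketch like this (the local path is a placeholder):

# Sketch: verify a downloaded adapter_model.bin against the Git LFS
# pointer above (oid = SHA-256 of the blob, size = byte count).
import hashlib
import os

path = "adapter_model.bin"  # placeholder: wherever the blob was downloaded
expected_oid = "b5937500708dd1e16d7b8227c07f312cc82a5fd332af29eed3cb669edb73de8e"
expected_size = 134263757

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("pointer matches blob")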