Upload model
- README.md +7 -0
- adapter_model.bin +1 -1
README.md
CHANGED
@@ -48,6 +48,12 @@ The following `bitsandbytes` quantization config was used during training:
 - llm_int8_skip_modules: None
 - llm_int8_enable_fp32_cpu_offload: False
 
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: True
 - llm_int8_threshold: 6.0
@@ -76,6 +82,7 @@ The following hyperparameters were used during training:
 - PEFT 0.5.0
 - PEFT 0.5.0
 - PEFT 0.5.0
+- PEFT 0.5.0
 - Transformers 4.28.1
 - Pytorch 2.0.1+cu117
 - Datasets 2.13.0
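The quantization settings listed in the README diff correspond to fields of `transformers.BitsAndBytesConfig`. A minimal sketch of how they would be passed when loading the base model; the model name is a placeholder, since the diff does not name the base model:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the config block added in the README diff above.
quant_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # load_in_8bit: True
    llm_int8_threshold=6.0,                  # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,              # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False,  # llm_int8_enable_fp32_cpu_offload: False
)

# "base-model-name" is a hypothetical placeholder, not taken from this diff.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-name",
    quantization_config=quant_config,
)
```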
adapter_model.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:bba335e615eea46edaa82a15de1b71d816891db3ac1665ef58df87f139892765
 size 37854861
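The `adapter_model.bin` entry is a Git LFS pointer file: three `key value` lines giving the spec version, the SHA-256 digest of the real file, and its size in bytes. A minimal sketch parsing the new pointer shown above into its fields:

```python
# Pointer text copied from the new side of the adapter_model.bin diff.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:bba335e615eea46edaa82a15de1b71d816891db3ac1665ef58df87f139892765
size 37854861
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())

algo, digest = fields["oid"].split(":", 1)  # e.g. ("sha256", "bba335e6...")
size_bytes = int(fields["size"])            # size of the actual adapter weights
```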