nielsbantilan committed
Commit 0bcc593
1 Parent(s): dce112b

Upload folder using huggingface_hub

Files changed (3):
  1. README.md +18 -13
  2. adapter_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -1,16 +1,21 @@
  ---
- language:
- - en
- license: apache-2.0
- tags:
- - pytorch
- - causal-lm
- - llama2
- - code llama
- - fine-tuning
- - flyte llama
- - flyte repo dataset
-
+ library_name: peft
  ---
+ ## Training procedure
+
+
+ The following `bitsandbytes` quantization config was used during training:
+ - quant_method: bitsandbytes
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: True
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: True
+ - bnb_4bit_compute_dtype: bfloat16
+ ### Framework versions
+

- # FlyteLlama-v0-7b-hf fine-tuned on Flyte repos
+ - PEFT 0.5.0
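For reference, here is a minimal sketch of what the quantization config above looks like when built with the `transformers` `BitsAndBytesConfig` API. The base checkpoint id is an assumption (this commit never names it); every field value mirrors the list in the README diff.

```python
# Sketch only: reconstructs the bitsandbytes config listed in the README diff.
# The base checkpoint name below is an assumption, not stated in this commit.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,                     # load_in_8bit: False
    load_in_4bit=True,                      # load_in_4bit: True
    llm_int8_threshold=6.0,                 # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,             # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=True,  # llm_int8_enable_fp32_cpu_offload: True
    llm_int8_has_fp16_weight=False,         # llm_int8_has_fp16_weight: False
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

# Hypothetical base model id for a 7B Code Llama checkpoint.
base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```

Loading the base model in 4-bit NF4 with double quantization and bfloat16 compute is the standard QLoRA-style setup, which is consistent with the small PEFT adapter committed alongside this README.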
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7ab78716caf1e4202e5116db608efb1b610c3b05fc7dd24c311b8fdd4f3b30c2
+ oid sha256:13ab2458b4b510c4bb05f4c90139aa5f08f4a6af243fa6db675bd7f83087c211
  size 16822989
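`adapter_model.bin` is the PEFT adapter checkpoint; at roughly 16 MB it holds only the adapter weights, not the full 7B model, which is why only its LFS hash changes here. A hedged sketch of attaching the updated adapter to a quantized base model with PEFT 0.5.0; the repo id below is a placeholder, not taken from this commit:

```python
# Sketch only: attach the updated adapter weights with PEFT 0.5.0.
# "your-org/FlyteLlama-adapter" is a placeholder repo id, not from this commit.
from peft import PeftModel

# `base_model` is the 4-bit quantized model from the previous sketch.
model = PeftModel.from_pretrained(base_model, "your-org/FlyteLlama-adapter")
model.eval()
```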
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:96e2b1b1ebc290c407aaea372c88594148df65a6451edabadf2e7a423d8c00a0
+ oid sha256:ed1b340b952d59862cdbd135ea1f584d06e80a19feb1b43cbc6b3ab87f32bde3
  size 4027
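`training_args.bin` is the `TrainingArguments` object that the Hugging Face `Trainer` pickles next to its checkpoints; the changed hash means the run configuration differs from the parent commit. A minimal sketch for inspecting it, assuming `transformers` is importable so the class can be unpickled:

```python
# Sketch only: training_args.bin is a pickled transformers TrainingArguments.
import torch

# On newer PyTorch versions you may need weights_only=False to unpickle it.
training_args = torch.load("training_args.bin")
print(training_args)  # shows learning rate, batch sizes, epochs, etc.
```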