shimwoohyeon committed on
Commit 9a2d806 • 1 Parent(s): 6cb3265

Upload model

Files changed (3):
  1. README.md +3 -28
  2. adapter_config.json +8 -4
  3. adapter_model.safetensors +2 -2
README.md CHANGED

@@ -1,34 +1,8 @@
 ---
 library_name: peft
-base_model: KT-AI/midm-bitext-S-7B-inst-v1
+base_model: meta-llama/Llama-2-7b-chat-hf
 ---
-# Accuracy
-## Result analysis
-
-### Performance measured on the top 1,000 samples of the test dataset
-
-|                 | Predicted positive | Predicted negative |
-|:---------------:|:------------------:|:------------------:|
-| Actual positive | TP: 452            | FN: 43             |
-| Actual negative | FP: 41             | TN: 440            |
-
-- **True Positive**: 452
-- **True Negative**: 440
-- **False Positive**: 41
-- **False Negative**: 43
-
-- **precision**: 0.9168356997971603
-- **recall**: 0.9131313131313131
-- **f1 score**: 0.9149797570850203
-
-#### Cases where the model predicted a word other than '긍정' (positive) or '부정' (negative)
-
-- **predicted as ' ' or '정'**: 24 in total
-
+
 # Model Card for Model ID
 
 <!-- Provide a quick summary of what the model is/does. -->
@@ -242,4 +216,5 @@ The following `bitsandbytes` quantization config was used during training:
 
 ### Framework versions
 
-- PEFT 0.6.2
+
+- PEFT 0.6.2
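The precision, recall, and F1 values in the accuracy section removed from the README follow directly from the confusion-matrix counts. A stdlib-only sketch that reproduces them (function names are mine, not from the repo):

```python
# Recomputing the metrics from the README's confusion matrix:
# TP=452, TN=440, FP=41, FN=43.

def precision(tp: int, fp: int) -> float:
    """Fraction of positive predictions that were correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of actual positives that were found."""
    return tp / (tp + fn)

def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

print(precision(452, 41))  # ~0.9168
print(recall(452, 43))     # ~0.9131
print(f1(452, 41, 43))     # ~0.9150
```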
adapter_config.json CHANGED

@@ -1,7 +1,7 @@
 {
   "alpha_pattern": {},
   "auto_mapping": null,
-  "base_model_name_or_path": "KT-AI/midm-bitext-S-7B-inst-v1",
+  "base_model_name_or_path": "meta-llama/Llama-2-7b-chat-hf",
   "bias": "none",
   "fan_in_fan_out": false,
   "inference_mode": true,
@@ -16,9 +16,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "c_proj",
-    "c_attn",
-    "c_fc"
+    "o_proj",
+    "up_proj",
+    "q_proj",
+    "v_proj",
+    "gate_proj",
+    "down_proj",
+    "k_proj"
   ],
   "task_type": "CAUSAL_LM"
 }
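The new `target_modules` list covers the linear layers of a Llama-2 block (the q/k/v/o attention projections plus the gate/up/down MLP projections), replacing the GPT-style `c_attn`/`c_proj`/`c_fc` module names used by the previous midm base model. A quick stdlib check of the changed fields; the JSON below is a hand-copied excerpt of the diff, not the full config file:

```python
import json

# Excerpt of the updated adapter_config.json fields shown in this diff.
excerpt = """
{
  "base_model_name_or_path": "meta-llama/Llama-2-7b-chat-hf",
  "target_modules": ["o_proj", "up_proj", "q_proj", "v_proj",
                     "gate_proj", "down_proj", "k_proj"],
  "task_type": "CAUSAL_LM"
}
"""
adapter_config = json.loads(excerpt)

# All seven Llama-2 linear projections are LoRA targets.
print(sorted(adapter_config["target_modules"]))
```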
adapter_model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ec191b81e42cdc67a1cf6e8f8f62dae1321ac27fbaa0141fd09ea57aaa710482
-size 67010784
+oid sha256:f547d951b79226b6bd479eb5bcf6f89635bac809ec585f573faf84400bd3de8f
+size 80013120
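What the repo actually versions for `adapter_model.safetensors` is a Git LFS pointer file, not the weights themselves: three `key value` lines giving the spec version, the sha256 of the real blob, and its byte size (here 80,013,120 bytes, roughly 80 MB of adapter weights). Parsing such a pointer is a one-liner; the pointer text below is copied from this diff:

```python
# A Git LFS pointer file: "key value" lines, one per field.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:f547d951b79226b6bd479eb5bcf6f89635bac809ec585f573faf84400bd3de8f
size 80013120
"""

# Split each line on the first space to get {key: value}.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
print(int(fields["size"]))  # 80013120
```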