euneeei committed on
Commit 7a8ac2c · 1 Parent(s): 2069ec6

Update README.md

Files changed (1): README.md +13 -47
README.md CHANGED
@@ -12,44 +12,9 @@ euneeei/hw-midm-7B-nsmc
  ### Training Data

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- A dataset of Naver movie reviews in Korean.
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- training_args: TrainingArguments = field(
-     default_factory=lambda: TrainingArguments(
-         output_dir="./results",
-         max_steps=500,
-         logging_steps=20,
-         # save_steps=10,
-         per_device_train_batch_size=1,
-         per_device_eval_batch_size=1,
-         gradient_accumulation_steps=2,
-         gradient_checkpointing=False,
-         group_by_length=False,
-         # learning_rate=1e-4,
-         learning_rate=2e-4,
-         lr_scheduler_type="cosine",
-         warmup_steps=100,
-         warmup_ratio=0.03,
-         max_grad_norm=0.3,
-         weight_decay=0.05,
-         save_total_limit=20,
-         save_strategy="epoch",
-         num_train_epochs=1,
-         optim="paged_adamw_32bit",
-         fp16=True,
-         remove_unused_columns=False,
-         report_to="wandb",
-     )
- )
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+ - A dataset of Naver movie reviews in Korean.
+

- [More Information Needed]

  ## Evaluation
 
@@ -79,22 +44,23 @@ training_args: TrainingArguments = field(
  <!-- These are the evaluation metrics being used, ideally with a description of why. -->


-               precision    recall  f1-score   support
-     negative       0.87      0.95      0.91       492
-     positive       0.94      0.87      0.90       508
-     accuracy                           0.91      1000
-    macro avg       0.91      0.91      0.91      1000
- weighted avg       0.91      0.91      0.91      1000
+ |              | precision | recall | f1-score | support |
+ |--------------|-----------|--------|----------|---------|
+ | negative     | 0.87      | 0.95   | 0.91     | 492     |
+ | positive     | 0.94      | 0.87   | 0.90     | 508     |
+ | accuracy     |           |        | 0.91     | 1000    |
+ | macro avg    | 0.91      | 0.91   | 0.91     | 1000    |
+ | weighted avg | 0.91      | 0.91   | 0.91     | 1000    |

- confusion matrix
+ #### Confusion matrix

- [[466,  26]
-  [ 68, 440]]
+ [[466,  26]
+  [ 68, 440]]

  [More Information Needed]

  ### Results
- Accuracy increased from 0.51 to 0.91.
+ - Accuracy increased from 0.51 to 0.91.
  [More Information Needed]

 
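The `training_args` snippet removed above is a `field` default from a script-level dataclass, which is why it appears without surrounding context in the old README. The sketch below only illustrates that pattern; the commit does not include the actual training script, the `ScriptArguments` name is an assumption, and only a few of the listed hyperparameters are repeated.

```python
# Illustrative sketch, not the card's actual training script.
# "ScriptArguments" is an assumed name; the values shown are copied from the
# removed README snippet above (most are omitted here for brevity).
from dataclasses import dataclass, field

from transformers import TrainingArguments


@dataclass
class ScriptArguments:
    training_args: TrainingArguments = field(
        default_factory=lambda: TrainingArguments(
            output_dir="./results",
            max_steps=500,
            logging_steps=20,
            per_device_train_batch_size=1,
            gradient_accumulation_steps=2,
            learning_rate=2e-4,
            lr_scheduler_type="cosine",
            optim="paged_adamw_32bit",
        )
    )


if __name__ == "__main__":
    args = ScriptArguments()
    # The TrainingArguments object would normally be handed to a Trainer.
    print(args.training_args.learning_rate)
```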
 
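The card does not say how the metrics table and confusion matrix above were produced. They have the shape of scikit-learn's `classification_report` and `confusion_matrix` output, so a plausible, but assumed, evaluation snippet is sketched below with placeholder labels.

```python
# Assumed evaluation sketch; the commit does not include the actual eval code.
# y_true / y_pred stand in for the 1,000 gold labels and model predictions
# (0 = negative, 1 = positive).
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0, 0, 0, 1, 1, 1]   # placeholder gold sentiment labels
y_pred = [0, 0, 1, 1, 1, 0]   # placeholder labels parsed from the model's output

print(classification_report(y_true, y_pred, target_names=["negative", "positive"]))
print(confusion_matrix(y_true, y_pred))  # rows: true class, columns: predicted class
```

Read with scikit-learn's convention (rows are true classes, columns are predicted classes), the matrix in the diff would mean 466 of the 492 negative reviews and 440 of the 508 positive reviews were classified correctly.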
66