## Model Description
- Fine-tuned meta-llama/Llama-2-7b-chat-hf on the NSMC (Naver Sentiment Movie Corpus) dataset
- Given a movie review included in the prompt, the model directly generates the prediction text '긍정' (positive) or '부정' (negative); see the inference sketch after this list
- The top 2,000 samples of the NSMC train split were used for training
- Only 1,000 samples from the test split were used for evaluation
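Below is a minimal inference sketch, assuming the fine-tuned weights are a PEFT (LoRA) adapter loaded on top of the base model. The prompt template and generation settings are illustrative assumptions; the exact template used during training is not documented in this card.

```python
# Inference sketch: classify a review as '긍정' (positive) or '부정' (negative).
# The prompt template below is an ASSUMPTION, not the documented training template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "cxoijve/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # assumes a LoRA adapter repo
model.eval()

review = "이 영화 정말 재미있어요!"  # "This movie is really fun!"
prompt = f"다음 영화 리뷰의 감정을 '긍정' 또는 '부정'으로 분류하세요.\n리뷰: {review}\n감정:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=4, do_sample=False)

# Keep only the newly generated tokens, which should read '긍정' or '부정'.
generated = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True).strip())
```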
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- logging_steps: 100
- max_steps: 1600
- trainable params: 19,988,480 || all params: 6,758,404,096 || trainable%: 0.2957573965106688
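The trainable/total parameter counts above (19,988,480 of 6,758,404,096, about 0.30%) are consistent with a LoRA adapter of rank 8 applied to every linear projection in Llama-2-7B, though the adapter configuration is not recorded in this card. A minimal sketch of the setup implied by the listed hyperparameters, with all LoRA values marked as assumptions:

```python
# Training-setup sketch reproducing the hyperparameters listed above.
# The LoraConfig values (r, lora_alpha, target_modules, lora_dropout) are
# ASSUMPTIONS: r=8 over all linear projections matches the reported
# trainable-parameter count, but the actual config is not documented here.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,            # assumption: rank consistent with 19,988,480 trainable params
    lora_alpha=16,  # assumption: common default, not documented
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
    lora_dropout=0.05,  # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./outputs",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # effective train batch size of 2
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    logging_steps=100,
    max_steps=1600,
)
```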
## Training Results
- global_step: 1600
- training_loss: 0.7893
- train_runtime: 5,825.2 s
- train_samples_per_second: 0.549
- train_steps_per_second: 0.275
- total_flos: 6.51e+16
- epoch: 1.6
## Accuracy
Llama2: accuracy 0.52 (derived from the confusion matrix below; see the sketch after the table)
|  | Actual Positive | Actual Negative |
|---|---|---|
| Predicted Positive | 192 | 168 |
| Predicted Negative | 317 | 324 |
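For reference, the reported 0.52 accuracy can be recomputed from the confusion matrix above (note the four cells sum to 1,001 predictions):

```python
# Recomputing the metrics from the confusion matrix above.
tp, fn = 192, 317  # actual-positive reviews: predicted positive / negative
fp, tn = 168, 324  # actual-negative reviews: predicted positive / negative

total = tp + fn + fp + tn         # 1001 evaluated predictions
accuracy = (tp + tn) / total      # (192 + 324) / 1001 ≈ 0.516
precision = tp / (tp + fp)        # 192 / 360 ≈ 0.533
recall = tp / (tp + fn)           # 192 / 509 ≈ 0.377

print(f"accuracy={accuracy:.3f}, precision={precision:.3f}, recall={recall:.3f}")
```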
Several attempts were made to improve the accuracy, but errors occurred repeatedly.
## Model Card Authors
cxoijve