---
library_name: peft
---
- **WIP**
Data used: https://raw.githubusercontent.com/Beomi/KoAlpaca/main/alpaca_data.json
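
The snippet below is a minimal sketch of how that KoAlpaca JSON file could be loaded and turned into instruction prompts. The Alpaca-style field names (`instruction`, `input`, `output`) and the prompt template are assumptions for illustration; the exact formatting used for this adapter is not recorded in this repo.

```python
import json
from urllib.request import urlopen

# URL from the line above; the prompt template below is assumed, not documented here.
DATA_URL = "https://raw.githubusercontent.com/Beomi/KoAlpaca/main/alpaca_data.json"

with urlopen(DATA_URL) as f:
    records = json.load(f)

def format_example(ex):
    # Standard Alpaca-style fields: instruction / input / output (assumed).
    if ex.get("input"):
        prompt = (
            f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Input:\n{ex['input']}\n\n### Response:\n"
        )
    else:
        prompt = f"### Instruction:\n{ex['instruction']}\n\n### Response:\n"
    return {"text": prompt + ex["output"]}

examples = [format_example(ex) for ex in records]
print(examples[0]["text"][:200])
```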
Training configuration (`transformers.TrainingArguments`):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    "output",
    fp16=True,                      # mixed-precision training
    gradient_accumulation_steps=1,
    per_device_train_batch_size=1,
    learning_rate=1e-4,
    max_steps=3000,
    logging_steps=100,
    remove_unused_columns=False,    # keep columns the data collator needs
    seed=0,
    data_seed=0,
    group_by_length=False,
)
```
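
Since this repo is a `peft` adapter, training presumably attached a LoRA adapter to the base ChatGLM model before running the Trainer with the arguments above. The sketch below shows that pattern; the base checkpoint (`THUDM/chatglm-6b`), the LoRA rank/alpha/dropout, and the `query_key_value` target module are assumptions for illustration, not values recorded here.

```python
import torch
from transformers import AutoModel, AutoTokenizer, Trainer
from peft import LoraConfig, TaskType, get_peft_model

# Assumed base checkpoint; this repo does not state which ChatGLM variant was used.
BASE_MODEL = "THUDM/chatglm-6b"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
model = AutoModel.from_pretrained(
    BASE_MODEL, trust_remote_code=True, torch_dtype=torch.float16
)

# Assumed LoRA hyperparameters -- r / lora_alpha / target_modules are illustrative only.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# `train_dataset` would be the tokenized KoAlpaca examples shown earlier.
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```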