---
library_name: peft
base_model: THUDM/chatglm-6b
---
- **WIP** — work in progress.

Data used: https://raw.githubusercontent.com/Beomi/KoAlpaca/main/alpaca_data.json
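A minimal sketch of reading that file, assuming it follows the standard Alpaca schema (a JSON array of records with `instruction`, `input`, and `output` keys); the exact prompt template used for training is not specified in this README, so the formatter below is only an illustration:

```python
import json

# Stand-in for the contents of alpaca_data.json (assumed Alpaca schema).
sample = json.loads(
    '[{"instruction": "...", "input": "", "output": "..."}]'
)

def build_prompt(rec):
    """Format one record into a single training prompt string."""
    if rec.get("input"):
        return (f"### Instruction:\n{rec['instruction']}\n\n"
                f"### Input:\n{rec['input']}\n\n"
                f"### Response:\n{rec['output']}")
    return (f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Response:\n{rec['output']}")

print(build_prompt(sample[0]))
```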
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    "output",                        # output directory
    fp16=True,                       # mixed-precision training
    gradient_accumulation_steps=1,
    per_device_train_batch_size=1,
    learning_rate=1e-4,
    max_steps=3000,
    logging_steps=100,
    remove_unused_columns=False,     # keep all dataset columns for the collator
    seed=0,
    data_seed=0,
    group_by_length=False,
)
```
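With these arguments, the effective batch size and the total number of training examples consumed follow directly (a back-of-the-envelope calculation, assuming a single device):

```python
# Training budget implied by the arguments above (single-device assumption).
per_device_train_batch_size = 1
gradient_accumulation_steps = 1
max_steps = 3000

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
examples_seen = effective_batch_size * max_steps
print(effective_batch_size, examples_seen)  # 1 3000
```

So the run sees 3000 examples in total, i.e. well under one epoch of the KoAlpaca data.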