---
library_name: transformers
license: apache-2.0
base_model: llm-jp/llm-jp-3-3.7b-instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
  results: []
language:
- ja
---

# Kendamarron/LongWriter-llm-jp-3-3.7b-instruct

This model was fine-tuned (SFT) from [llm-jp/llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) to enable long-form text generation.

## Dataset

- [Kendamarron/Japanese-LongWriter-3k](https://huggingface.co/datasets/Kendamarron/Japanese-LongWriter-3k)

## Details

https://zenn.dev/kendama/articles/32aa9ec4bed409

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08 and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7184        | 1.2626 | 500  | 0.7673          |

### Framework versions

- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3

### LLaMA-Factory yaml

```yaml
### model
model_name_or_path: llm-jp/llm-jp-3-3.7b-instruct

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json
enable_liger_kernel: true

### dataset
dataset: longwriter
template: alpaca_ja
cutoff_len: 32768
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llm_jp/full/sft
logging_steps: 1
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 1
learning_rate: 1.0e-5
optim: adamw_bnb_8bit
num_train_epochs: 2.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

### logging
report_to: wandb
```
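
### Usage

A minimal inference sketch using the standard `transformers` API. It assumes the tokenizer inherits the base model's chat template; the prompt and generation parameters below are illustrative, not part of the original card.

```python
# Minimal inference sketch (illustrative; assumes the chat template of the base model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Kendamarron/LongWriter-llm-jp-3-3.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Ask for a ~5000-character essay on Japan's four seasons (long-form output).
messages = [
    {"role": "user", "content": "日本の四季について5000字程度のエッセイを書いてください。"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=8192,  # allow a long completion
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```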