---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
  results: []
language:
- ja
datasets:
- Kendamarron/OpenMathInstruct-2-ja-CoT
- Kendamarron/Magpie-Tanuki-8B-CoT
---

# Model

A reasoning model created by fine-tuning [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on chain-of-thought (CoT) data.
Training used synthetic datasets generated with Qwen2.5-32B-Instruct-AWQ:

- [Kendamarron/Magpie-Tanuki-8B-CoT](https://huggingface.co/datasets/Kendamarron/Magpie-Tanuki-8B-CoT)
- [Kendamarron/OpenMathInstruct-2-ja-CoT](https://huggingface.co/datasets/Kendamarron/OpenMathInstruct-2-ja-CoT)

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"

model = AutoModelForCausalLM.from_pretrained(
    'Kendamarron/Qwen2.5-7B-o1-ja-v0.1',
    torch_dtype=torch.bfloat16,
    device_map=device,
)
tokenizer = AutoTokenizer.from_pretrained('Kendamarron/Qwen2.5-7B-o1-ja-v0.1')

messages = [
    # System prompt (Japanese): "You are an excellent, logical assistant. First write your
    # thought process inside the tag, then write the final output for the user inside the tag."
    {"role": "system", "content": "あなたは優秀で論理的なアシスタントです。まずはタグの中であなたの思考の過程を記載し、タグの中に最終的にユーザーに提供する出力を記載します。"},
    # User prompt (Japanese): "What is the sum of the integers from 1 to 10?"
    {"role": "user", "content": "1から10までの整数を足すと?"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.95,
    top_k=40,
    temperature=0.7,
    repetition_penalty=1.1,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    no_repeat_ngram_size=2
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

A sketch for splitting the reasoning from the final answer is given in the "Parsing the response" section at the end of this card.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0

### Training results

### Framework versions

- Transformers 4.46.1
- PyTorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3

### LLaMA-Factory yaml

```yaml
### model
model_name_or_path: Qwen/Qwen2.5-7B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: cot_normal, cot_math
template: qwen_ja
cutoff_len: 8192
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/qwen2.5/full/sft
logging_steps: 1
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 16
learning_rate: 1.0e-5
num_train_epochs: 2.0
lr_scheduler_type: cosine
optim: adamw_bnb_8bit
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

### logging
report_to: wandb
```
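
## Parsing the response

Because the system prompt in the Usage example asks the model to write its reasoning and its final answer inside separate tags, the decoded `response` usually needs a small post-processing step before only the final answer is shown to a user. The following is a minimal sketch assuming the reasoning is wrapped in `<Thought>...</Thought>` and the answer in `<Output>...</Output>`; these tag names are assumptions, so check the model's actual output and adjust them accordingly.

```python
import re

# Assumed tag names; replace them with the tags the model actually emits.
THOUGHT_TAG = "Thought"
OUTPUT_TAG = "Output"

def split_response(response: str) -> tuple[str, str]:
    """Split a generated response into (reasoning, final answer).

    Falls back to returning the whole response as the answer when the
    expected tags are not present.
    """
    thought = re.search(rf"<{THOUGHT_TAG}>(.*?)</{THOUGHT_TAG}>", response, re.DOTALL)
    output = re.search(rf"<{OUTPUT_TAG}>(.*?)</{OUTPUT_TAG}>", response, re.DOTALL)
    reasoning = thought.group(1).strip() if thought else ""
    answer = output.group(1).strip() if output else response.strip()
    return reasoning, answer

# `response` is the decoded string produced by the Usage example above.
reasoning, answer = split_response(response)
print(answer)
```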