---
base_model: unsloth/Qwen2.5-7B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
datasets:
- airesearch/WangchanThaiInstruct
---

## Dataset
This model was fine-tuned on airesearch/WangchanThaiInstruct (23 Sep 2024).
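You can take a quick look at the dataset with the `datasets` library (a minimal sketch; the `train` split name is an assumption):

```python
from datasets import load_dataset

# Load the instruction-tuning dataset from the Hugging Face Hub.
ds = load_dataset("airesearch/WangchanThaiInstruct", split="train")
print(ds)      # column names and row count
print(ds[0])   # inspect one example
```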
Training details:
- epochs: 1
- learning rate: 2e-4
- learning rate scheduler type: linear
- warmup ratio: 0.3
- cutoff len (i.e. context length): 2048
- global batch size: 8
- fine-tuning type: QLoRA
- optimizer: adamw_8bit
P.S. Training took about 12 hours on a Kaggle T4 GPU.
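For reference, a minimal QLoRA fine-tuning sketch with Unsloth and TRL that mirrors these settings might look like the following (the dataset column names, LoRA rank, and per-device batch split are assumptions, and exact `SFTTrainer` arguments vary by TRL version):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model; QLoRA trains LoRA adapters on top of it.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct-bnb-4bit",
    max_seq_length=2048,   # cutoff len from the card
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(model, r=16)  # rank 16 is an assumption

dataset = load_dataset("airesearch/WangchanThaiInstruct", split="train")

def to_text(example):
    # Hypothetical column names; adapt to the dataset's actual schema.
    msgs = [
        {"role": "user", "content": example["Instruction"]},
        {"role": "assistant", "content": example["Output"]},
    ]
    return {"text": tokenizer.apply_chat_template(msgs, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=1,
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        warmup_ratio=0.3,
        per_device_train_batch_size=2,   # 2 x 4 accumulation = global batch 8
        gradient_accumulation_steps=4,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```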
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Konthee/Qwen2.5-7B-ThaiInstruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "สอนภาษาไทยหน่อย"},  # "Teach me some Thai"
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Uploaded model
- Developed by: Konthee
- License: apache-2.0
- Finetuned from model: unsloth/Qwen2.5-7B-Instruct-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.