--- library_name: transformers license: apache-2.0 datasets: - pythainlp/han-instruct-dataset-v2.0 language: - th pipeline_tag: text-generation --- # Model Card for Han LLM 7B v1 Han LLM v1 is a model that trained by han-instruct-dataset v2.0. The model are working with Thai. Base model: [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b) [Google colab](https://colab.research.google.com/drive/1qOa5FNL50M7lpz3mXkDTd_f3yyqAvPH4?usp=sharing) ## Model Details ### Model Description The model was trained by LoRA and han instruct dataset v2. This is the model card of a ðŸĪ— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Wannaphong Phatthiyaphaibun - **Model type:** text-generation - **Language(s) (NLP):** Thai - **License:** apache-2.0 - **Finetuned from model:** [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b) ## Uses Thai users ### Out-of-Scope Use Math, Coding, and other language ## Bias, Risks, and Limitations The model can has a bias from dataset. Use at your own risks! ## How to Get Started with the Model Use the code below to get started with the model. ```python # !pip install accelerate sentencepiece transformers bitsandbytes import torch from transformers import pipeline pipe = pipeline("text-generation", model="wannaphong/han-llm-7b-v1", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "āđāļĄāļ§āļ„āļ·āļ­āļ­āļ°āđ„āļĢ"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=120, do_sample=True, temperature=0.9, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` output: ``` <|User|> āđāļĄāļ§āļ„āļ·āļ­āļ­āļ°āđ„āļĢ <|Assistant|> āđāļĄāļ§āļ„āļ·āļ­ āļŠāļąāļ•āļ§āđŒāđ€āļĨāļĩāđ‰āļĒāļ‡āļ—āļĩāđˆāļĄāļĩāļŦāļđāđāļŦāļĨāļĄ āļŠāļ­āļšāļ™āļ­āļ™ āđāļĨāļ°āļāļĢāļ°āđ‚āļ”āļ”āđ„āļ›āļĄāļē āđāļĄāļ§āļĄāļĩāļ‚āļ™āļ™āļļāđˆāļĄāđāļĨāļ°āđ€āļŠāļĩāļĒāļ‡āļĢāđ‰āļ­āļ‡āđ€āļŦāļĄāļĩāļĒāļ§ āđ† āđāļĄāļ§āļĄāļĩāļŦāļĨāļēāļĒāļŠāļĩāđāļĨāļ°āļžāļąāļ™āļ˜āļļāđŒ <|User|> āļ‚āļ­āļšāļ„āļļāļ“āļ„āđˆāļ° <|Assistant|> āļ‰āļąāļ™āļ‚āļ­āđāļ™āļ°āļ™āļģāđƒāļŦāđ‰āđ€āļ˜āļ­āļ”āļđāđ€āļĢāļ·āđˆāļ­āļ‡ "Bamboo House of Cat" āļ‚āļ­āļ‡ Netflix āļĄāļąāļ™āđ€āļ›āđ‡āļ™āļ‹āļĩāļĢāļĩāļŠāđŒāļ—āļĩāđˆāđ€āļāļĩāđˆāļĒāļ§āļāļąāļšāđāļĄāļ§ 4 āļ•āļąāļ§ āđāļĨāļ°āđ€āļ”āđ‡āļāļŠāļēāļ§ 1 āļ„āļ™ āđ€āļ˜āļ­āļ•āđ‰āļ­āļ‡āđƒāļŠāđ‰āļŠāļĩāļ§āļīāļ•āļ­āļĒāļđāđˆāļ”āđ‰āļ§āļĒāļāļąāļ™āđƒāļ™āļšāđ‰āļēāļ™āļŦāļĨāļąāļ‡āļŦāļ™āļķāđˆāļ‡ āļœāļđāđ‰āļāļģāļāļąāļš: āļŠāļēāļĢāđŒāļĨāļĩ āđ€āļŪāļĨ āļ™āļģāđāļŠāļ”āļ‡: āđ€āļ­āđ‡āļĄāļĄāđˆāļē ``` ## Training Details ### Training Data [Han Instruct dataset v2.0](https://huggingface.co/datasets/pythainlp/han-instruct-dataset-v2.0) ### Training Procedure Use LoRa - r: 48 - lora_alpha: 16 - 1 epoch