---
license: mit
language:
- th
pipeline_tag: text-generation
tags:
- instruction-finetuning
library_name: adapter-transformers
datasets:
- iapp_wiki_qa_squad
- tatsu-lab/alpaca
- wongnai_reviews
- wisesight_sentiment
---
# Buffala-LoRA-TH
Buffala-LoRA is a 7B-parameter LLaMA model fine-tuned to follow instructions. It was trained on the Stanford Alpaca (TH), WikiTH, Pantip, and IAppQ&A datasets and uses the Hugging Face LLaMA implementation. For more information, please visit [the project's website](https://github.com/tloen/alpaca-lora).
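
Below is a minimal inference sketch showing how a LoRA adapter like this one is typically loaded on top of a LLaMA-7B base model with the `peft` library. The repo ids (`decapoda-research/llama-7b-hf` for the base weights and `Buffala/Buffala-LoRA-TH` for the adapter) are assumptions for illustration, not confirmed by this card; substitute the actual ids before running.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_id = "decapoda-research/llama-7b-hf"  # assumed base LLaMA-7B checkpoint
adapter_id = "Buffala/Buffala-LoRA-TH"           # hypothetical adapter repo id

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)

# Alpaca-style instruction prompt (here with a Thai instruction).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nสวัสดี ช่วยแนะนำตัวหน่อย\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```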