---
license: mit
language:
  - th
pipeline_tag: text-generation
tags:
  - instruction-finetuning
library_name: adapter-transformers
datasets:
  - iapp_wiki_qa_squad
  - tatsu-lab/alpaca
  - wongnai_reviews
  - wisesight_sentiment
---

πŸƒπŸ‡ΉπŸ‡­ Buffala-LoRA-TH

Buffala-LoRA is a 7B-parameter LLaMA model finetuned to follow instructions. It is trained on the Stanford Alpaca (TH), WikiTH, Pantip, and IAppQ&A datasets, and it uses the Hugging Face LLaMA implementation. For more information, please visit the project's website.
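
Since the model is finetuned on Alpaca-style instruction data, prompts presumably follow the Stanford Alpaca template. A minimal sketch of that formatting (the exact template used for this model is an assumption, not confirmed by this card):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request in the Stanford Alpaca prompt style
    (assumed here because the model is trained on Alpaca (TH) data)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example: a Thai instruction with additional input context.
prompt = build_prompt("แปลประโยคนี้เป็นภาษาอังกฤษ", "สวัสดีครับ")
```

The resulting string would then be passed to the tokenizer and `generate()` as with any causal language model.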