---
license: mit
language:
- th
pipeline_tag: text-generation
tags:
- instruction-finetuning
library_name: adapter-transformers
datasets:
- iapp_wiki_qa_squad
- tatsu-lab/alpaca
- wongnai_reviews
- wisesight_sentiment
---

# 🐃🇹🇭 Buffala-LoRA-TH

Buffala-LoRA is a 7B-parameter LLaMA model finetuned to follow instructions. It is trained on the Stanford Alpaca (TH), WikiTH, Pantip, and IAppQ&A datasets and uses the Hugging Face LLaMA implementation. For more information, please visit [the project's website](https://github.com/tloen/alpaca-lora).
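
Below is a minimal sketch of how a LoRA adapter like this is typically loaded with 🤗 Transformers and PEFT, following the alpaca-lora pattern. The repo IDs are placeholders (this card does not state the published adapter path), and the base-model ID is an assumption; substitute the actual Hub locations.

```python
# Loading sketch, not the official usage instructions.
# Assumptions: "your-org/buffala-lora-th" stands in for the real adapter
# repo, and the LLaMA-7B base weights are available on the Hub or locally.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "decapoda-research/llama-7b-hf"  # assumed HF LLaMA-7B port
adapter = "your-org/buffala-lora-th"          # hypothetical adapter repo

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter)
model.eval()

# Alpaca-style instruction prompt, here with a Thai instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nอธิบายว่า LoRA คืออะไร\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because LoRA stores only low-rank adapter matrices, the download is small and the same base LLaMA weights can be reused across adapters; `PeftModel.from_pretrained` merges the adapter into the forward pass at load time.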