---
library_name: peft
license: cc-by-sa-4.0
datasets:
- Tarklanse/Traditional_Chinese_roleplay_chat_Dataset
language:
- zh
- en
pipeline_tag: text-generation
---
## Llama3-8B_Traditional_Chinese_roleplay_chat_lora

## Model description
這是一個於meta-llama/Meta-Llama-3-8B-Instruct上訓練的LoRA,其資料集使用了大量的AI生成的對話範例,使meta-llama/Meta-Llama-3-8B-Instruct能夠盡可能地維持原本充滿活力的對話方式,並能在繁體中文的情況下更加擅長去扮演除了助手以外的角色

This is a LoRA trained on meta-llama/Meta-Llama-3-8B-Instruct, with a dataset consisting of a large number of AI-generated conversation examples. The training lets Meta-Llama-3-8B-Instruct keep its original, lively conversational style as much as possible while becoming better at playing roles other than an assistant, especially in Traditional Chinese contexts.

## Training Dataset

微調用的資料集由少量個人撰寫與以此為基礎生成的大量AI生成對話內容組成,使用alpaca-format,約9千條instruction、共12.6MB的大小

The fine-tuning dataset consists of a small number of personally written conversations and a large amount of AI-generated dialogue built on top of them, in the Alpaca format. It comprises roughly 9,000 instructions and is about 12.6 MB in size.
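For context, an Alpaca-format record is an instruction/input/output triple. How the roleplay dialogue is split across those fields here is my assumption rather than something the card states, and the strings below are placeholders, not actual dataset entries:

```python
# Generic shape of an Alpaca-format record. The concrete contents in this dataset
# are roleplay dialogue; the strings below are placeholders for illustration only.
example_record = {
    "instruction": "Character persona and scene setup (placeholder)",
    "input": "The user's latest message (placeholder)",
    "output": "The character's in-role reply (placeholder)",
}
```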

## Training
使用UnslothAI提供的Alpaca + Llama-3 8b Unsloth 2x faster finetuning.ipynb,於Google Colab上調用L4進行微調,訓練參數除將max_steps=60改為num_train_epochs = 1外,其他參數照舊

Fine-tuning was done with UnslothAI's "Alpaca + Llama-3 8b Unsloth 2x faster finetuning.ipynb" notebook on Google Colab using an L4 GPU. The only change to the training configuration was replacing max_steps=60 with num_train_epochs = 1 in TrainingArguments.
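As a rough guide, here is a minimal sketch of that setup. Apart from num_train_epochs = 1, the hyperparameters shown are the notebook's published defaults rather than values confirmed by this card, and the dataset column names and prompt template are assumptions:

```python
from datasets import load_dataset
from unsloth import FastLanguageModel, is_bfloat16_supported
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit and attach LoRA adapters (Unsloth defaults).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumes the dataset exposes the usual Alpaca columns: instruction / input / output.
dataset = load_dataset("Tarklanse/Traditional_Chinese_roleplay_chat_Dataset", split="train")
alpaca_prompt = "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"

def to_text(batch):
    # Flatten each record into a single training string, ending with EOS.
    texts = [
        alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token
        for ins, inp, out in zip(batch["instruction"], batch["input"], batch["output"])
    ]
    return {"text": texts}

dataset = dataset.map(to_text, batched=True)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        num_train_epochs=1,   # replaces the notebook's max_steps=60
        learning_rate=2e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```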


## Usage
這是個LoRA,你必須先行讀取meta-llama/Meta-Llama-3-8B-Instruct,再讀取Llama3-8B_Traditional_Chinese_roleplay_chat_lora

This is a LoRA adapter: load meta-llama/Meta-Llama-3-8B-Instruct first, then load Llama3-8B_Traditional_Chinese_roleplay_chat_lora on top of it.
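A minimal loading sketch with transformers + peft is shown below; the adapter id used here is hypothetical, so replace it with this repo's actual id or a local path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
# Hypothetical adapter location; replace with this repo's actual id or a local path.
adapter_id = "Tarklanse/Llama3-8B_Traditional_Chinese_roleplay_chat_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA weights on top of the base model.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```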

Prompt需要撰寫角色人格與個性;如有必要則需要另寫對話範例讓AI更好理解如何扮演角色,後續只要再接對話紀錄即可

The prompt should spell out the character's persona and personality; if needed, add example dialogue so the model better understands how to play the role, then simply append the conversation history.

使用的Prompt範例如下:

An example prompt looks like this:
```
小明是一個朝九晚五的上班族,個性樂觀開朗,雖然領的薪水穿不暖又餓不死,但他仍然努力地活在當下
以下是小明與使用者的對話
小明:*今天是你上班的第一天,小明被指派為你的指導員,協助你快速理解公司文化,並讓你能快速上手你的工作,在會議室和你講了數十分鐘的簡報後終於初步介紹完畢*...以上是我們公司大概在做甚麼,現在,你有甚麼問題想要提問的嗎?
使用者:你聽起來有那麼一點倦怠,你有離職的打算嗎?
小明:
```
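Assuming the model and tokenizer loaded above, one way to continue the roleplay is plain-text completion of that prompt; the sampling settings here are illustrative, and depending on how your prompts are written, the Llama-3 chat template may also work:

```python
# Continue the roleplay from the example prompt above (plain-text completion).
prompt = """小明是一個朝九晚五的上班族,個性樂觀開朗,雖然領的薪水穿不暖又餓不死,但他仍然努力地活在當下
以下是小明與使用者的對話
小明:*今天是你上班的第一天,小明被指派為你的指導員,協助你快速理解公司文化,並讓你能快速上手你的工作,在會議室和你講了數十分鐘的簡報後終於初步介紹完畢*...以上是我們公司大概在做甚麼,現在,你有甚麼問題想要提問的嗎?
使用者:你聽起來有那麼一點倦怠,你有離職的打算嗎?
小明:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
# Print only the newly generated continuation (小明's reply).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```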

## Known issues 
對話長到一定程度會開始重複回應

Once a conversation gets long enough, the model can start repeating its responses.




## License
這個LoRA以CC BY-SA 4.0作為分享

This LoRA is shared under CC BY-SA 4.0.

整體使用請遵照Meta的社群許可,不要將其用於非法用途或生成不適當的內容

Overall use must also follow Meta's Llama 3 Community License; do not use it for illegal purposes or to generate inappropriate content.