---
license: mit
language:
- zh
base_model:
- joeddav/xlm-roberta-large-xnli
pipeline_tag: text-classification
tags:
- emotion
library_name: transformers
datasets:
- Johnson8187/Chinese_Multi-Emotion_Dialogue_Dataset
---
# chinese-text-emotion-classifier
A companion model fine-tuned from a different base model with a smaller parameter count is also available. If you need faster inference, it is a suitable choice, and in practice its measured performance is close to this model's.
Model: [Chinese-Emotion-Small](https://huggingface.co/Johnson8187/Chinese-Emotion-Small)
## 📚 Model Introduction
This model is fine-tuned based on the [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) model, specializing in **Chinese text emotion analysis**.
Through fine-tuning, the model can identify the following 8 emotion labels:
- **Neutral tone**
- **Concerned tone**
- **Happy tone**
- **Angry tone**
- **Sad tone**
- **Questioning tone**
- **Surprised tone**
- **Disgusted tone**
The model is applicable to various scenarios, such as customer service emotion monitoring, social media analysis, and user feedback classification.
---
## 🚀 Quick Start
### Install Dependencies
Ensure that you have installed Hugging Face's Transformers library and PyTorch:
```bash
pip install transformers torch
```
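With the dependencies installed, you can run a quick smoke test through the Transformers `pipeline` API before writing any custom code. This is a minimal sketch rather than the card's official usage: it assumes the checkpoint's `id2label` config is populated with the emotion names above; if it is not, the pipeline returns generic `LABEL_<n>` ids that you can translate with the `label_mapping` table in the next snippet.
```python
from transformers import pipeline

# Wrap the checkpoint in a text-classification pipeline (runs on CPU by default).
classifier = pipeline("text-classification", model="Johnson8187/Chinese-Emotion")

# One of the example sentences used later in this card.
result = classifier("我完全沒想到你會這麼做,這讓我驚訝到無法言喻。")
print(result)  # e.g. [{'label': '驚奇語調', 'score': ...}] or [{'label': 'LABEL_6', ...}]
```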
### Load the Model
Use the following code to load the model and tokenizer, and perform emotion classification:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Select a device (GPU if available, otherwise CPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Mapping from class index to emotion label
label_mapping = {
    0: "平淡語氣",  # Neutral tone
    1: "關切語調",  # Concerned tone
    2: "開心語調",  # Happy tone
    3: "憤怒語調",  # Angry tone
    4: "悲傷語調",  # Sad tone
    5: "疑問語調",  # Questioning tone
    6: "驚奇語調",  # Surprised tone
    7: "厭惡語調",  # Disgusted tone
}

def predict_emotion(text, model_path="Johnson8187/Chinese-Emotion"):
    # Load the model and tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)  # Move the model to the device

    # Convert the text into the model's input format
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True).to(device)  # Move the inputs to the device

    # Run inference
    with torch.no_grad():
        outputs = model(**inputs)

    # Read off the predicted class and map it to its emotion label
    predicted_class = torch.argmax(outputs.logits).item()
    predicted_emotion = label_mapping[predicted_class]

    return predicted_emotion

if __name__ == "__main__":
    # Usage example
    test_texts = [
        "雖然我努力了很久,但似乎總是做不到,我感到自己一無是處。",
        "你說的那些話真的讓我很困惑,完全不知道該怎麼反應。",
        "這世界真的是無情,為什麼每次都要給我這樣的考驗?",
        "有時候,我只希望能有一點安靜,不要再聽到這些無聊的話題。",
        "每次想起那段過去,我的心還是會痛,真的無法釋懷。",
        "我從來沒有想過會有這麼大的改變,現在我覺得自己完全失控了。",
        "我完全沒想到你會這麼做,這讓我驚訝到無法言喻。",
        "我知道我應該更堅強,但有些時候,這種情緒真的讓我快要崩潰了。"
    ]

    for text in test_texts:
        emotion = predict_emotion(text)
        print(f"Text: {text}")
        print(f"Predicted emotion: {emotion}\n")
```
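If you want a confidence estimate rather than only the top label, the logits can be turned into probabilities with a softmax. The sketch below is an optional extension of the example above, not part of the original card; it assumes the `label_mapping` dictionary from the previous snippet is in scope, and the name `predict_emotion_with_scores` is only an illustration.
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def predict_emotion_with_scores(text, model_path="Johnson8187/Chinese-Emotion"):
    # Same loading steps as predict_emotion above; for repeated calls you would
    # normally load the tokenizer and model once outside the function.
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)

    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits

    # Softmax turns the raw logits into a probability distribution over the 8 labels.
    # label_mapping is the dictionary defined in the snippet above.
    probs = F.softmax(logits, dim=-1).squeeze(0)
    return {label_mapping[i]: round(p.item(), 4) for i, p in enumerate(probs)}

print(predict_emotion_with_scores("每次想起那段過去,我的心還是會痛,真的無法釋懷。"))
```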
---
### Dataset
- The fine-tuning data consists of 4,000 self-annotated, high-quality Traditional Chinese emotion sentences covering the emotion categories above, which supports the model's ability to generalize in emotion classification.
- [Johnson8187/Chinese_Multi-Emotion_Dialogue_Dataset](https://huggingface.co/datasets/Johnson8187/Chinese_Multi-Emotion_Dialogue_Dataset)
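If you want to inspect the fine-tuning data directly, it can be downloaded with the 🤗 `datasets` library. This is a minimal sketch under the assumption that the dataset has a `train` split; the actual split and column names are not listed on this card, so check the printed structure before relying on specific field names.
```python
from datasets import load_dataset

# Download the annotated emotion dataset from the Hugging Face Hub.
ds = load_dataset("Johnson8187/Chinese_Multi-Emotion_Dialogue_Dataset")

# Inspect the splits and columns before assuming specific field names.
print(ds)
print(ds["train"][0])  # assumes a "train" split exists
```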
---
## 🌟 Contact and Feedback
If you encounter any issues while using this model, please contact:
- Email: `fable8043@gmail.com`
- Hugging Face project page: [chinese-text-emotion-classifier](https://huggingface.co/Johnson8187/chinese-text-emotion-classifier)