|
--- |
|
base_model: poom-sci/WangchanBERTa-finetuned-sentiment |
|
datasets: |
|
- pythainlp/wisesight_sentiment |
|
language: |
|
- th |
|
library_name: transformers |
|
license: apache-2.0 |
|
pipeline_tag: text-classification |
|
tags: |
|
- generated_from_trainer |
|
model-index: |
|
- name: sentiment-thai-text-model |
|
results: [] |
|
--- |
|
|
|
|
|
|
# sentiment-thai-text-model |
|
|
|
This model is a fine-tuned version of [poom-sci/WangchanBERTa-finetuned-sentiment](https://huggingface.co/poom-sci/WangchanBERTa-finetuned-sentiment) on the [pythainlp/wisesight_sentiment](https://huggingface.co/datasets/pythainlp/wisesight_sentiment) dataset.
|
|
|
## Model description |
|
|
|
This model is a fine-tuned version of poom-sci/WangchanBERTa-finetuned-sentiment, tailored for sentiment analysis of Thai-language text. Fine-tuning was performed to improve performance on a custom Thai dataset for sentiment classification. The model is based on WangchanBERTa, a transformer-based language model for Thai developed by the VISTEC-depa Thailand AI Research Institute (AIResearch).
|
|
|
## Intended uses & limitations |
|
|
|
This model performs sentiment analysis, categorizing input text into three classes: positive, neutral, and negative. It can be used in a variety of natural language processing (NLP) applications, such as:

- Social media sentiment analysis
- Sentiment classification of product or service reviews
- Customer feedback processing
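
As a usage sketch, the snippet below runs the model through the standard `transformers` pipeline. The Hub ID is a placeholder (the card only gives the model name `sentiment-thai-text-model`), and the exact label names depend on the model's `config.json`.

```python
from transformers import pipeline

# Placeholder Hub ID -- substitute the actual repository path of this model.
classifier = pipeline(
    "text-classification",
    model="your-username/sentiment-thai-text-model",
)

# Thai for "The food is delicious, highly recommended" (expected: positive).
print(classifier("อาหารอร่อยมาก แนะนำเลย"))
# e.g. [{'label': 'pos', 'score': 0.98}] -- label names depend on config.json
```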
|
|
|
Limitations:

- Language: The model is specialized for Thai text and may not perform well on other languages.
- Generalization: Performance depends on the quality and diversity of the fine-tuning data; the model may not generalize well to domains that differ significantly from the training data.
- Ambiguity: Highly ambiguous or sarcastic sentences may still be challenging.
|
|
|
## Training and evaluation data |
|
|
|
The model was fine-tuned on a sentiment classification dataset of Thai-language text. The dataset includes text from multiple domains, such as social media, product reviews, and general user feedback, labeled into three categories:

- Positive: the text expresses positive sentiment.
- Neutral: the text is neutral or objective in sentiment.
- Negative: the text expresses negative sentiment.
|
More details are available on the [pythainlp/wisesight_sentiment](https://huggingface.co/datasets/pythainlp/wisesight_sentiment) dataset page.
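
For reference, here is a sketch of inspecting the public dataset with the `datasets` library. Column and label names should be verified against the dataset card; note that the raw corpus also includes a fourth "question" class, which a three-class setup like this card describes would need to filter or remap.

```python
from datasets import load_dataset

# Load the public Wisesight Sentiment corpus from the Hugging Face Hub.
ds = load_dataset("pythainlp/wisesight_sentiment")

# Inspect the schema and a sample row; verify column and label names here
# before building the three-class training setup described above.
print(ds["train"].features)
print(ds["train"][0])
```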
|
|
|
## Training procedure |
|
|
|
The model was trained with the hyperparameters listed in the next subsection. Training used cross-entropy loss for multi-class classification, with early stopping based on evaluation metrics; see the sketch after the hyperparameter list.
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 2e-05 |
|
- train_batch_size: 32 |
|
- eval_batch_size: 32 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 2 |
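
For reference, a minimal sketch of these settings expressed as `TrainingArguments`. The `output_dir`, the evaluation/save strategy, and the early-stopping patience are illustrative assumptions, not values recorded from the original run.

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

# Mirrors the hyperparameters listed above. output_dir, eval/save strategy,
# and early-stopping patience are assumptions for illustration only.
training_args = TrainingArguments(
    output_dir="sentiment-thai-text-model",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="epoch",       # per-epoch evaluation, needed for early stopping
    save_strategy="epoch",
    load_best_model_at_end=True,
)

# trainer = Trainer(
#     model=model,                  # a sequence-classification model, prepared elsewhere
#     args=training_args,
#     train_dataset=train_dataset,  # tokenized splits, prepared elsewhere
#     eval_dataset=eval_dataset,
#     callbacks=[EarlyStoppingCallback(early_stopping_patience=1)],
# )
# trainer.train()
```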
|
|
|
### Framework versions |
|
|
|
- Transformers 4.44.2 |
|
- Pytorch 2.4.1+cu121 |
|
- Datasets 3.0.1 |
|
- Tokenizers 0.19.1 |