
sentiment-thai-text-model

This model is a fine-tuned version of poom-sci/WangchanBERTa-finetuned-sentiment trained on the pythainlp/wisesight_sentiment dataset.

Model description

This model is a fine-tuned version of poom-sci/WangchanBERTa-finetuned-sentiment, specifically tailored for sentiment analysis on Thai-language text. The fine-tuning was performed to improve performance on Thai sentiment classification using the pythainlp/wisesight_sentiment dataset. The model is based on WangchanBERTa, a transformer-based language model for Thai developed by the VISTEC-depa Thailand AI Research Institute (AIResearch).

Intended uses & limitations

This model is designed to perform sentiment analysis, categorizing input text into three classes: positive, neutral, and negative. It can be used in a variety of natural language processing (NLP) applications such as:

  • Social media sentiment analysis
  • Product or service review sentiment classification
  • Customer feedback processing
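
As a minimal usage sketch with the transformers pipeline API (the exact label strings returned depend on the id2label mapping stored in the checkpoint config):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="SandboxBhh/sentiment-thai-text-model",
)

# Classify a Thai sentence ("The food at this restaurant is very tasty").
print(classifier("อาหารร้านนี้อร่อยมาก"))
# Output is a list like [{"label": ..., "score": ...}]; the label string
# comes from the model's id2label mapping.
```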

Limitations:

  • Language: The model is specialized for Thai text and may not perform well on other languages.
  • Generalization: Performance depends on the quality and diversity of the fine-tuning data; the model may not generalize well to domains that differ significantly from the training data.
  • Ambiguity: Highly ambiguous or sarcastic sentences may still be challenging.

Training and evaluation data

The model was fine-tuned on pythainlp/wisesight_sentiment, a sentiment-classification dataset of Thai-language text. The dataset includes sentences and passages from multiple domains, such as social media, product reviews, and general user feedback, labeled into three categories:

  • Positive: the text expresses positive sentiment.
  • Neutral: the text is neutral or objective in sentiment.
  • Negative: the text expresses negative sentiment.

More details on the dataset used can be provided upon request.
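
The underlying corpus can be inspected with the datasets library (column names below follow the published dataset card); note that the released Wisesight Sentiment corpus also contains a fourth "q" (question) label, and how it was handled for this model is not documented here:

```python
from datasets import load_dataset

# Load the Wisesight Sentiment corpus from the Hugging Face Hub.
dataset = load_dataset("pythainlp/wisesight_sentiment")

# Inspect the label set and one example record.
print(dataset["train"].features["category"].names)
print(dataset["train"][0])
```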

Training procedure

The model was trained with cross-entropy loss for multi-class classification and early stopping based on evaluation metrics. The hyperparameters are listed in the next section, and a reconstruction of the setup is sketched after that list.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
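
These values map onto transformers TrainingArguments roughly as sketched below. This is a reconstruction under stated assumptions, not the authors' script: the output directory, tokenization length, early-stopping patience, and the handling of Wisesight's fourth "q" label are all guesses.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

base = "poom-sci/WangchanBERTa-finetuned-sentiment"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)  # 3-way head

dataset = load_dataset("pythainlp/wisesight_sentiment")
# Drop the fourth "q" (question) class so labels match the 3-way head;
# whether the authors did the same is an assumption.
dataset = dataset.filter(lambda ex: ex["category"] != 3)
dataset = dataset.map(
    lambda batch: tokenizer(batch["texts"], truncation=True, max_length=256),
    batched=True,
)
dataset = dataset.rename_column("category", "labels")

training_args = TrainingArguments(
    output_dir="sentiment-thai-text-model",  # assumed, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,  # required for early stopping
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],  # patience assumed
)
trainer.train()
```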

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1
