# Sentiment Analysis Model: Fine-Tuned DistilBERT

## Overview

This repository contains a fine-tuned version of the distilbert-base-uncased model, designed for sentiment analysis of tweets. The model is trained to classify the sentiment of a sentence into two categories: positive (label 0) and negative (label 1).
## Model Description

The fine-tuned model uses the distilbert-base-uncased architecture and was trained on a dataset of GPT-3.5-generated tweets. It takes a sentence as input and outputs a binary sentiment label: 0 for positive and 1 for negative.
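
A minimal inference sketch using the transformers library is shown below. The model id is a placeholder (this card does not state the repository's id), so replace it with the actual model path or hub id before running.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Placeholder: substitute this repository's model id or a local checkpoint path.
model_id = "path/to/this-fine-tuned-distilbert"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Just got my coffee order wrong again. Great start to the day."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
# Per this card's label convention: 0 = positive, 1 = negative.
label = "positive" if predicted == 0 else "negative"
print(label)
```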
## Training Data

The model was trained on a dataset of tweets that were generated and sentiment-labeled by GPT-3.5. Each tweet in the training set was labeled as either positive or negative to provide ground truth for training. A sketch of one way to reproduce this fine-tuning setup follows.
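
The exact training script and hyperparameters are not documented in this card. The sketch below shows one plausible fine-tuning setup with the Hugging Face Trainer, assuming a hypothetical tweets_train.csv file with "text" and "label" columns (0 = positive, 1 = negative); the file name, epoch count, batch size, and learning rate are illustrative assumptions rather than the values actually used.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical CSV with "text" and "label" columns; the GPT-3.5-generated
# dataset itself is not distributed with this model card.
dataset = load_dataset("csv", data_files={"train": "tweets_train.csv"})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

def tokenize(batch):
    # Pad/truncate tweets to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# Illustrative hyperparameters, not the values documented for this model.
args = TrainingArguments(
    output_dir="distilbert-tweet-sentiment",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()
```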