This model is based on the multilingual DistilBERT model ("distilbert/distilbert-base-multilingual-cased"), fine-tuned for Humor Recognition in the Greek language.

Model Details

The model was fine-tuned for 10 epochs on the Greek Humorous Dataset.

Pre-processing details

The input text needs to be pre-processed by removing all Greek diacritics and punctuation and converting all letters to lowercase.
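As an illustration, a minimal pre-processing sketch in Python is shown below (the exact normalization used during training is an assumption based on the description above):

import re
import unicodedata

def preprocess(text):
    # Lowercase, then decompose accented characters and drop the combining marks (diacritics)
    text = unicodedata.normalize("NFD", text.lower())
    text = "".join(ch for ch in text if unicodedata.category(ch) != "Mn")
    # Remove punctuation, keeping only letters, digits and whitespace
    return re.sub(r"[^\w\s]", "", text)

print(preprocess("Γιατί, ρε φίλε;"))  # -> "γιατι ρε φιλε"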

Load Pretrained Model

from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

# Load the tokenizer and the fine-tuned binary classification model (humorous / non-humorous)
tokenizer = DistilBertTokenizer.from_pretrained("kallantis/Humor-Recognition-Greek-DistilBERT")
model = DistilBertForSequenceClassification.from_pretrained("kallantis/Humor-Recognition-Greek-DistilBERT", num_labels=2)
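A short usage sketch follows (the label mapping 0 = non-humorous, 1 = humorous is an assumption; apply the pre-processing described above before tokenizing):

import torch

text = "καλη φαση το ανεκδοτο"  # example input, already lowercased with diacritics and punctuation removed
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # 1 = humorous, 0 = non-humorous (assumed mapping)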
Model size: 135M parameters (F32 tensors, stored in Safetensors format)
