---
library_name: transformers
datasets:
  - kallantis/Greek-Humorous-Dataset
language:
  - el
pipeline_tag: text-classification
---

The model is based on XLM-RoBERTa large ("xlm-roberta-large"), fine-tuned for Humor Recognition in the Greek language.

## Model Details

The model was fine-tuned for 10 epochs on the Greek Humorous Dataset.

## Pre-processing Details

Input text must be pre-processed before being passed to the model: remove all Greek diacritics and punctuation, and convert all letters to lowercase.
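The steps above can be sketched in Python. This is a minimal example, not the authors' exact pipeline: it assumes Unicode NFD decomposition for stripping diacritics and ASCII punctuation removal (the Greek question mark is commonly typed as the ASCII semicolon, which this also removes).

```python
import string
import unicodedata


def preprocess(text: str) -> str:
    """Strip Greek diacritics and punctuation, then lowercase."""
    # NFD splits accented letters into base letter + combining mark;
    # dropping category "Mn" (nonspacing mark) removes the diacritics.
    decomposed = unicodedata.normalize("NFD", text)
    no_diacritics = "".join(
        ch for ch in decomposed if unicodedata.category(ch) != "Mn"
    )
    # Remove ASCII punctuation and lowercase the result.
    no_punct = no_diacritics.translate(str.maketrans("", "", string.punctuation))
    return no_punct.lower()


print(preprocess("Γειά σου, Κόσμε!"))  # -> γεια σου κοσμε
```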

## Load Pretrained Model

```python
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kallantis/Humor-Recognition-Greek-XLM-R-large")
model = XLMRobertaForSequenceClassification.from_pretrained(
    "kallantis/Humor-Recognition-Greek-XLM-R-large",
    num_labels=2,
    ignore_mismatched_sizes=True,
)
```
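Once loaded, the model can be used for binary classification. The sketch below assumes text has already been pre-processed as described above; which label index corresponds to "humorous" is an assumption, as the card does not document the mapping.

```python
import torch
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kallantis/Humor-Recognition-Greek-XLM-R-large")
model = XLMRobertaForSequenceClassification.from_pretrained(
    "kallantis/Humor-Recognition-Greek-XLM-R-large",
    num_labels=2,
    ignore_mismatched_sizes=True,
)
model.eval()

# Text should already be pre-processed: no diacritics/punctuation, lowercase.
text = "γεια σου κοσμε"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()  # 0 or 1; class meaning is assumed, not documented
print(pred)
```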