---
language:
  - en
tags:
  - text
  - stance
  - text-classification
pipeline_tag: text-classification
widget:
  - text: >-
      user Bolsonaro is the president of Brazil. He speaks for all brazilians.
      Greta is a climate activist. Their opinions do create a balance that the
      world needs now
    example_title: example 1
  - text: >-
      user The fact is that she still doesn’t change her ways and still stays
      non environmental friendly
    example_title: example 2
  - text: user The criteria for these awards dont seem to be very high.
    example_title: example 3
base_model: j-hartmann/sentiment-roberta-large-english-3-classes
model-index:
  - name: Stance-Tw
    results:
      - task:
          type: stance-classification
          name: Text Classification
        dataset:
          name: stance
          type: stance
        metrics:
          - type: f1
            value: 75.8
          - type: accuracy
            value: 76.2
---

# Stance-Tw

This model is a fine-tuned version of [j-hartmann/sentiment-roberta-large-english-3-classes](https://huggingface.co/j-hartmann/sentiment-roberta-large-english-3-classes) that predicts three categories of author stance toward an entity mentioned in the text: attack, support, or neutral.

# Model usage
```python
from transformers import pipeline

model_path = "eevvgg/Stance-Tw"
# pass device=0 to run on GPU
cls_task = pipeline(task="text-classification", model=model_path, tokenizer=model_path)

sequence = ["his rambling has no clear ideas behind it",
            "That has nothing to do with medical care",
            "Turns around and shows how qualified she is because of her political career.",
            "She has very little to gain by speaking too much"]

result = cls_task(sequence)

labels = [i["label"] for i in result]
labels  # ['attack', 'neutral', 'support', 'attack']
```
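Each returned item also carries a confidence score. To inspect the full distribution over the three classes, the pipeline's `top_k` option can be used; this is a minimal sketch, and `top_k` is standard `transformers` pipeline behaviour rather than something documented in this card:

```python
from transformers import pipeline

model_path = "eevvgg/Stance-Tw"
# top_k=None returns a score for every class instead of only the top label
scores_task = pipeline(task="text-classification", model=model_path,
                       tokenizer=model_path, top_k=None)

scores_task("his rambling has no clear ideas behind it")
# [[{'label': 'attack', 'score': ...}, ...]]  -- one dict per class, sorted by score
```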

# Intended uses & limitations

The model is suited for stance classification in short texts. It was fine-tuned on a manually annotated corpus of 3.2k examples.

# Training procedure

## Training hyperparameters

The following hyperparameters were used during training (an equivalent fine-tuning sketch follows the list):

- optimizer: Adam (learning rate 4e-5, decay 0.01)
- epochs: 3
- mini-batch size: 8
- loss: 0.719
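The card does not publish the training code itself; below is a minimal sketch of an equivalent setup with the PyTorch `Trainer`, mirroring the hyperparameters above. The toy dataset, the label mapping, and reading `decay` as weight decay are all assumptions:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "j-hartmann/sentiment-roberta-large-english-3-classes"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)  # already a 3-class head

# Toy stand-in for the 3.2k manually annotated corpus; the texts and the
# label mapping (0=attack, 1=neutral, 2=support) are illustrative assumptions.
data = Dataset.from_dict({
    "text": ["his rambling has no clear ideas behind it",
             "That has nothing to do with medical care"],
    "label": [0, 1],
})
data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                    padding="max_length", max_length=128),
                batched=True)

args = TrainingArguments(
    output_dir="stance-tw",
    learning_rate=4e-5,             # from the card
    weight_decay=0.01,              # 'decay': 0.01 read as weight decay (assumption)
    num_train_epochs=3,             # from the card
    per_device_train_batch_size=8,  # mini-batch size from the card
)

Trainer(model=model, args=args, train_dataset=data).train()
```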

# Evaluation data

The model achieves the following results on the evaluation set (a short metric-computation sketch follows the list):

- macro F1-score: 0.758
- weighted F1-score: 0.762
- accuracy: 0.762
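These are standard scikit-learn metrics; a minimal sketch of how such scores are computed, where `y_true`/`y_pred` are placeholder gold and predicted labels rather than the actual evaluation set:

```python
from sklearn.metrics import accuracy_score, f1_score

# placeholder gold and predicted stance labels; the real evaluation set
# is not distributed with this card
y_true = ["attack", "neutral", "support", "attack"]
y_pred = ["attack", "neutral", "attack", "attack"]

print("macro F1:   ", f1_score(y_true, y_pred, average="macro"))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("accuracy:   ", accuracy_score(y_true, y_pred))
```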

# Citation

BibTeX: tba