---
language:
- en
tags:
- text
- stance
- text-classification
pipeline_tag: text-classification
widget:
- text: user Bolsonaro is the president of Brazil. He speaks for all brazilians. Greta
is a climate activist. Their opinions do create a balance that the world needs
now
example_title: example 1
- text: user The fact is that she still doesn’t change her ways and still stays non
environmental friendly
example_title: example 2
- text: user The criteria for these awards dont seem to be very high.
example_title: example 3
base_model: j-hartmann/sentiment-roberta-large-english-3-classes
model-index:
- name: Stance-Tw
results:
- task:
type: stance-classification
name: Text Classification
dataset:
name: stance
type: stance
metrics:
- type: f1
value: 75.8
- type: accuracy
value: 76.2
---
# Stance-Tw
This model is a fine-tuned version of [j-hartmann/sentiment-roberta-large-english-3-classes](https://huggingface.co/j-hartmann/sentiment-roberta-large-english-3-classes) to predict 3 categories of author stance (attack, support, neutral) towards an entity mentioned in the text.
- training procedure available in [Colab notebook](https://colab.research.google.com/drive/12DsO5dNaQI3kFO7ohOHZn4EWNewFy2jm?usp=sharing)
- result of a collaboration with [Laboratory of The New Ethos](https://newethos.org/laboratory/)
## Model usage

```python
from transformers import pipeline

model_path = "eevvgg/Stance-Tw"
# add device=0 to run on GPU
cls_task = pipeline(task="text-classification", model=model_path, tokenizer=model_path)

sequence = ['his rambling has no clear ideas behind it',
            'That has nothing to do with medical care',
            "Turns around and shows how qualified she is because of her political career.",
            'She has very little to gain by speaking too much']

result = cls_task(sequence)
labels = [i['label'] for i in result]
labels  # ['attack', 'neutral', 'support', 'attack']
```
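Each pipeline prediction is a dict with a `label` and a `score` field. A minimal post-processing sketch, using illustrative hand-written score dicts (not actual model output) and a hypothetical confidence threshold:

```python
# Illustrative pipeline-style output (hypothetical scores, not real model predictions)
result = [
    {'label': 'attack', 'score': 0.91},
    {'label': 'support', 'score': 0.55},
]

# Hypothetical post-processing: fall back to 'neutral' for low-confidence predictions
THRESHOLD = 0.6
labels = [r['label'] if r['score'] >= THRESHOLD else 'neutral' for r in result]
labels  # ['attack', 'neutral']
```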
## Intended uses & limitations
The model is suited for stance classification in short texts. It was fine-tuned on a manually annotated corpus of 3.2k examples.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam (learning rate 4e-5, decay 0.01)
- epochs: 3
- mini-batch size: 8
- loss: 0.719
## Evaluation results
The model achieves the following results on the evaluation set:
- macro F1-score: 0.758
- weighted F1-score: 0.762
- accuracy: 0.762
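Macro F1 averages the per-class F1 scores with equal weight, so it is not dominated by the largest class. A minimal pure-Python sketch of these metrics on a toy label set (the `gold`/`pred` lists below are made up for illustration, not the model's evaluation data):

```python
def per_class_f1(gold, pred, label):
    """F1 for a single class, computed from true/false positives and false negatives."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 scores."""
    labels = sorted(set(gold))
    return sum(per_class_f1(gold, pred, lab) for lab in labels) / len(labels)

# Toy example (hypothetical labels, not the model's evaluation set)
gold = ['attack', 'support', 'neutral', 'attack', 'support']
pred = ['attack', 'support', 'support', 'attack', 'neutral']
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
accuracy              # 0.6
macro_f1(gold, pred)  # 0.5
```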
## Citation
**BibTeX**: tba