Aspect Based Sentiment Analysis
This model is a fine-tuned version of NorBERT3-large, trained on the full sentence-level NorPaC_absa dataset with 25 unique, coarse-grained aspect+sentiment labels. This model and norbert3-fine-absa-full are the models that will be used in practice by NIPH/FHI, as they are trained on the full NorPaC_absa dataset.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ltg/norbert3-coarse-absa-full")
model = AutoModelForSequenceClassification.from_pretrained(
    "ltg/norbert3-coarse-absa-full", trust_remote_code=True
)
model.eval()

# "The GP listens to me, but I think the waiting time is too long."
text = "fastlegen lytter til meg, men jeg synes ventetiden er for lang."

# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)

# Run inference
with torch.no_grad():
    outputs = model(**inputs)

# Multi-label prediction: apply a sigmoid and keep every label above the threshold
threshold = 0.5
probs = torch.sigmoid(outputs.logits).squeeze()
predictions = [model.config.id2label[i] for i, prob in enumerate(probs) if prob > threshold]

print(predictions)
# -> ['staff_pos', 'avail_neg'] (healthcare providers and staff: positive, access and availability: negative)
```
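The predicted labels combine an aspect short-name with a polarity suffix. A minimal helper like the following (not part of the released model; the `parse_label` name and the `none` fallback are our own) can split them back into (aspect, sentiment) pairs, assuming the `<shortname>_<pos|neg>` convention shown in the example output:

```python
POLARITY = {"pos": "positive", "neg": "negative"}

def parse_label(label: str) -> tuple[str, str]:
    """Split a label such as 'staff_pos' into ('staff', 'positive').

    Labels without a '_pos'/'_neg' suffix (e.g. 'no-asp') are
    returned unchanged with polarity 'none'.
    """
    aspect, _, suffix = label.rpartition("_")
    if suffix in POLARITY:
        return aspect, POLARITY[suffix]
    return label, "none"

print(parse_label("staff_pos"))  # -> ('staff', 'positive')
print(parse_label("avail_neg"))  # -> ('avail', 'negative')
```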
Below is the distribution of coarse-grained labels within NorPaC_absa at the comment level.
| Aspect full name | Short-name | Instances |
|---|---|---|
| Healthcare providers and staff | staff | 1169 |
| Organization of health services | org | 502 |
| Access and availability | avail | 353 |
| Environment and facilities | env | 286 |
| Treatment | treat | 591 |
| *Uncategorized / Top-level aspects* | | |
| Outcome and impact of treatment / stay | oits | 319 |
| Patient involvement and participation | pip | 68 |
| General | gen | 1376 |
| No aspect / Neutral | no-asp | 92 |
| **Total** | | **4756** |
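As a quick sanity check on the table (this snippet is ours, not part of the model card), the per-label counts do sum to the stated total:

```python
# Comment-level label counts copied from the table above
counts = {
    "staff": 1169, "org": 502, "avail": 353, "env": 286, "treat": 591,
    "oits": 319, "pip": 68, "gen": 1376, "no-asp": 92,
}

total = sum(counts.values())
print(total)  # -> 4756, matching the 'Total' row
```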
As this model is fine-tuned on all of the data splits, there is no held-out evaluation. Performance metrics are, however, available in our paper for a version of the model trained only on the train split.
Coming.
Base model
ltg/norbert3-large