Update README.md
README.md CHANGED
@@ -6,41 +6,63 @@ model-index:
   results: []
 ---
 
-This model was trained from scratch on an unknown dataset.
-It achieves the following results on the evaluation set:
-## Training procedure
-- Transformers 4.25.1
-- TensorFlow 2.11.0
-- Tokenizers 0.13.2
+---
+tags:
+- generated_from_keras_callback
+model-index:
+- name: XLM-T-Sent-Politics
+  results: []
+---
+
+# XLM-T-Sent-Politics
+
+This is an "extension" of the `twitter-roberta-base-sentiment-latest` model ([model](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest)), further fine-tuned on original Twitter data posted in English about the 10th anniversary of the 2010 Haiti Earthquake.
+
+- Reference Paper: [Sentiment analysis (SA) (supervised and unsupervised classification) of original Twitter data posted in English about the 10th anniversary of the 2010 Haiti Earthquake](https://data.ncl.ac.uk/articles/dataset/Sentiment_analysis_SA_supervised_and_unsupervised_classification_of_original_Twitter_data_posted_in_English_about_the_10th_anniversary_of_the_2010_Haiti_Earthquake/19688040/1).
+
+## Full classification example
+
+```python
+from transformers import AutoModelForSequenceClassification
+from transformers import TFAutoModelForSequenceClassification
+from transformers import AutoTokenizer
+import numpy as np
+
+class_mapping = {0: "Negative", 1: "Neutral", 2: "Positive"}
+
+MODEL = "antypasd/twitter-roberta-base-sentiment-earthquake"
+
+tokenizer = AutoTokenizer.from_pretrained(MODEL)
+
+# PT
+model = AutoModelForSequenceClassification.from_pretrained(MODEL)
+model.save_pretrained(MODEL)
+
+text = "$202 million of $1.14 billion in United States (US) recovery aid went to a new 'industrial park' in Caracol, an area unaffected by the Haiti earthquake. The plan was to invite foreign garment companies to take advantage of extremely low-wage labor"
+encoded_input = tokenizer(text, return_tensors='pt')
+output = model(**encoded_input)
+scores = output[0][0].detach().numpy()
+prediction = np.argmax(scores)
+
+# # TF
+# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
+# model.save_pretrained(MODEL)
+
+# encoded_input = tokenizer(text, return_tensors='tf')
+# output = model(encoded_input)
+# scores = output[0][0].numpy()
+# prediction = np.argmax(scores)
+
+# Print label
+print(text, class_mapping[prediction])
+
+```
+
+Output:
+
+```
+Negative
+```
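
The `scores` in the example above are raw logits, and `np.argmax` only picks the top class. Below is a minimal sketch of recovering the full probability distribution with scipy's softmax, assuming the same `antypasd/twitter-roberta-base-sentiment-earthquake` checkpoint and the three-way label order used in the card; the input sentence is only illustrative.

```python
import numpy as np
from scipy.special import softmax
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Same checkpoint and label order as in the card's example (assumption).
MODEL = "antypasd/twitter-roberta-base-sentiment-earthquake"
class_mapping = {0: "Negative", 1: "Neutral", 2: "Positive"}

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

# Illustrative input text.
text = "Ten years on, much of the promised recovery aid never reached the affected communities."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)

# The model returns raw logits; softmax turns them into class probabilities.
scores = output.logits[0].detach().numpy()
probabilities = softmax(scores)

# Print classes from most to least likely.
for idx in np.argsort(probabilities)[::-1]:
    print(f"{class_mapping[idx]}: {probabilities[idx]:.4f}")
```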
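
For a quicker check, the checkpoint can also be loaded through the `pipeline` API. This is a minimal sketch, again assuming the `antypasd/twitter-roberta-base-sentiment-earthquake` id from the card; the returned label names come from the checkpoint's config and may appear as `LABEL_0`/`LABEL_1`/`LABEL_2`, to be mapped with the `class_mapping` above.

```python
from transformers import pipeline

# Illustrative quick check with the checkpoint used in the card's example.
sentiment = pipeline(
    "sentiment-analysis",
    model="antypasd/twitter-roberta-base-sentiment-earthquake",
)

result = sentiment("Recovery funds built an industrial park far from the disaster zone.")
print(result)
# e.g. [{'label': 'LABEL_0', 'score': ...}] -- map LABEL_{i} via class_mapping above
```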