Update README.md
README.md CHANGED
@@ -3,6 +3,7 @@ language:
 - en
 tags:
 - Text Classification
+co2_eq_emissions: 0.319355 Kg
 widget:
 - text: "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property."
   example_title: "Biased example 1"
@@ -14,21 +15,18 @@ widget:
   example_title: "Non-Biased example 2"
 ---
 
-##
+## About the Model
+This model is trained to detect bias in a sentence.
 
 - Dataset : MBAD Data
-
-
-- Train-Test split : 90 : 10
-- tokenizer : distilbert-base-uncased
-- model : distilbert-base-uncased
-- optimizer : adam
-- Learning rate : 5e-5
-- Model parameters : 66,955,010
-- epochs : 30
+- Carbon emission 0.319355 Kg
+
 - Train accuracy : 0.7697
 - Test accuracy : 0.62
 - Train loss : 0.4506
 - Test loss : 0.9644
-
+
+## Author
+This model is part of the Research topic "Bias and Fairness in AI" conducted by Shaina Raza, Deepak John Reji, Chen Ding. If you use this work (code, model or dataset), please cite as:
+> Bias & Fairness in AI, (2020), GitHub repository, <https://github.com/dreji18/Fairness-in-AI/tree/dev>
 
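The card above describes a sentence-level bias classifier fine-tuned from distilbert-base-uncased, so the checkpoint can presumably be queried through the standard `transformers` text-classification pipeline. Below is a minimal sketch under that assumption; the repo id is a placeholder, not something given in this diff.

```python
# Minimal inference sketch. MODEL_ID is a placeholder (assumption), not a repo
# id stated in the card; substitute the actual published checkpoint path.
from transformers import pipeline

MODEL_ID = "your-namespace/bias-detection-model"  # placeholder / assumption

# The card describes a bias classifier built on distilbert-base-uncased,
# so a plain text-classification pipeline should apply.
classifier = pipeline("text-classification", model=MODEL_ID)

# "Biased example 1" from the card's widget front matter
result = classifier(
    "Nevertheless, Trump and other Republicans have tarred the protests as "
    "havens for terrorists intent on destroying property."
)
print(result)  # e.g. [{'label': ..., 'score': ...}]
```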