---
language:
- en
tags:
- Text Classification
co2_eq_emissions: 0.319355 Kg
widget:
- text: "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property."
example_title: "Biased example 1"
- text: "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion."
example_title: "Biased example 2"
- text: "But it was Hawley’s keynote address at the National Conservatism Conference that nailed down who he is, what he believes, and where his party is going in a way that should be absolutely terrifying for every American."
example_title: "Non-Biased example 1"
- text: "While emphasizing he’s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology."
example_title: "Non-Biased example 2"
---
## About the Model
An English text-classification model, trained on the MBAD dataset to detect bias and fairness in sentences.
- Dataset: MBAD Data
- Carbon emissions: 0.319355 kg CO2 eq.

| Train Accuracy (%) | Validation Accuracy (%) | Train Loss | Test Loss |
|-------------------:|------------------------:|-----------:|----------:|
|              76.97 |                   62.00 |       0.45 |      0.96 |
## Usage
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)
model = TFAutoModelForSequenceClassification.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)  # pass device=0 (or 1) to run on a GPU if one is available
classifier("While emphasizing he’s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology.")
```
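The pipeline returns a list of `{"label": ..., "score": ...}` dictionaries. A minimal sketch of turning that output into a yes/no decision is shown below; the label names `"Biased"` and `"Non-biased"` are an assumption here, so check `classifier.model.config.id2label` for the names this model actually emits.

```python
def is_biased(predictions, threshold=0.5):
    """Return True when the top-scoring prediction is the biased class
    with at least `threshold` confidence.

    `predictions` is the list of {"label": ..., "score": ...} dicts
    returned by a transformers text-classification pipeline.
    The "Biased" label name is an assumption; verify it against
    classifier.model.config.id2label for this model.
    """
    top = max(predictions, key=lambda p: p["score"])
    return top["label"] == "Biased" and top["score"] >= threshold

# Example with a hand-written prediction in the pipeline's output format:
sample = [{"label": "Biased", "score": 0.91}, {"label": "Non-biased", "score": 0.09}]
print(is_biased(sample))  # True
```

Raising `threshold` trades recall for precision when flagging sentences for human review.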
## Author
This model is part of the research topic "Bias and Fairness in AI" conducted by Shaina Raza, Deepak John Reji, and Chen Ding. If you use this work (code, model, or dataset), please cite it as:
> Bias & Fairness in AI, (2020), GitHub repository, <https://github.com/dreji18/Fairness-in-AI/tree/dev>