An English classification model, trained on the MBAD dataset, to detect bias and fairness in text.
| Train Accuracy | Validation Accuracy | Train loss | Test loss |
|---------------:|--------------------:|-----------:|----------:|
|          76.97 |               62.00 |       0.45 |      0.96 |
## Usage

```python
from transformers import pipeline, AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)
model = TFAutoModelForSequenceClassification.from_pretrained("dreji18/bias-detection-model", use_auth_token=True)

classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)  # pass device=0, 1, ... to run on a GPU
classifier("While emphasizing he’s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology.")
```
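The classifier returns a list of dictionaries with `label` and `score` keys, as Transformers text-classification pipelines do. A minimal sketch of post-processing that result; the `interpret` helper and the `Biased` label shown here are illustrative, not part of this model card:

```python
def interpret(results, threshold=0.5):
    """Return the top-scoring label from a text-classification
    pipeline result, or 'uncertain' if its score is below threshold."""
    best = max(results, key=lambda r: r["score"])
    return best["label"] if best["score"] >= threshold else "uncertain"

# Mocked pipeline output; the label name is hypothetical.
print(interpret([{"label": "Biased", "score": 0.92}]))  # → Biased
```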
## Author
This model is part of the Research topic "Bias and Fairness in AI" conducted by Shaina Raza, Deepak John Reji, Chen Ding. If you use this work (code, model or dataset), please cite as:
> Bias & Fairness in AI, (2020), GitHub repository, <https://github.com/dreji18/Fairness-in-AI/tree/dev>