|
|
|
# Hugging Face with Bias Data in CoNLL Format |
|
|
|
## Introduction |
|
|
|
This README provides guidance on how to use the Hugging Face platform with bias-tagged datasets in the CoNLL format. |
|
Such datasets are essential for studying and mitigating bias in AI models. |
|
This dataset is curated by **Shaina Raza**. |
|
The methods and formatting discussed here are based on "Nbias: A natural language processing framework for BIAS identification in text" by Raza et al. (2024); see the citation below.
|
|
|
## Prerequisites |
|
- Install the Hugging Face `transformers` and `datasets` libraries: |
|
```bash |
|
pip install transformers datasets |
|
``` |
|
|
|
## Data Format |
|
|
|
Bias data in CoNLL format can be structured similarly to standard CoNLL, but with labels indicating bias instead of named entities: |
|
``` |
|
The O |
|
book O |
|
written B-BIAS |
|
by I-BIAS |
|
egoist I-BIAS |
|
women I-BIAS |
|
is O |
|
good O |
|
. O |
|
|
|
``` |
|
Here, the `B-` prefix marks the beginning of a biased span, `I-` marks tokens inside a biased span, and `O` marks tokens outside any biased span.
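To work with this layout programmatically, the file can be parsed into sentences of parallel token and tag lists. A minimal sketch (the function name `read_conll` and the blank-line sentence delimiter are assumptions based on standard CoNLL conventions; adjust to your file's actual structure):

```python
def read_conll(lines):
    """Parse CoNLL-style lines into (tokens, tags) sentence pairs.

    Sentences are separated by blank lines; each non-blank line holds
    a token and its tag separated by whitespace.
    """
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.strip()
        if not line:  # a blank line ends the current sentence
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        token, tag = line.split()[:2]
        tokens.append(token)
        tags.append(tag)
    if tokens:  # flush the last sentence if the file lacks a trailing blank line
        sentences.append((tokens, tags))
    return sentences

sample = ["The O", "book O", "written B-BIAS", "by I-BIAS",
          "egoist I-BIAS", "women I-BIAS", "is O", "good O", ". O"]
print(read_conll(sample))
```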
|
|
|
## Steps to Use with Hugging Face |
|
|
|
1. **Loading Bias-tagged CoNLL Data with Hugging Face** |
|
   - If your bias-tagged dataset in CoNLL format is hosted on the Hugging Face Hub, load it with:
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("newsmediabias/BIAS-CONLL") |
|
``` |
|
   - For custom datasets, ensure they are formatted correctly and load them from a local path.

   - If the dataset is gated or private, run `huggingface-cli login` first so your access token is available.
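However the data is loaded, token-classification models in the Hugging Face ecosystem typically expect each example to carry parallel token and label-id lists. A minimal sketch of that record structure in plain Python (the field names `tokens` and `ner_tags` and the label-to-id mapping are assumptions; check the actual dataset's features):

```python
# Hypothetical label-to-id mapping; verify against the dataset's actual features.
label2id = {"O": 0, "B-BIAS": 1, "I-BIAS": 2}

def to_record(tokens, tags):
    """Turn one parsed sentence into a Hugging Face-style example dict."""
    return {"tokens": tokens, "ner_tags": [label2id[t] for t in tags]}

record = to_record(["The", "book", "written"], ["O", "O", "B-BIAS"])
print(record)  # {'tokens': ['The', 'book', 'written'], 'ner_tags': [0, 0, 1]}
```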
|
|
|
|
|
2. **Preprocessing the Data** |
|
- Tokenization: |
|
```python |
|
from transformers import AutoTokenizer |
|
tokenizer = AutoTokenizer.from_pretrained("YOUR_PREFERRED_MODEL_CHECKPOINT") |
|
# the dataset is pre-tokenized, so tell the tokenizer not to re-split words
tokenized_input = tokenizer(dataset['train']['tokens'], is_split_into_words=True)
|
``` |
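   Subword tokenization splits words into pieces, so the per-word tags must be realigned to the tokenizer's output, typically via the tokenizer's `word_ids()`. The sketch below isolates that alignment as a pure function (a common convention, not specific to this dataset; `-100` is the index PyTorch's cross-entropy loss ignores):

   ```python
   def align_labels(word_ids, word_labels, ignore_index=-100):
       """Map per-word label ids onto subword positions.

       word_ids: output of the tokenizer's word_ids(); None marks special tokens.
       Only the first subword of each word keeps its label; special tokens
       and continuation subwords get ignore_index so the loss skips them.
       """
       aligned, previous = [], None
       for wid in word_ids:
           if wid is None:                # [CLS], [SEP], padding
               aligned.append(ignore_index)
           elif wid != previous:          # first subword of a new word
               aligned.append(word_labels[wid])
           else:                          # continuation subword
               aligned.append(ignore_index)
           previous = wid
       return aligned

   # word_ids for "The book written" tokenized as [CLS] The book writ ##ten [SEP]
   print(align_labels([None, 0, 1, 2, 2, None], [0, 0, 1]))
   # -> [-100, 0, 0, 1, -100, -100]
   ```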
|
|
|
3. **Training a Model on Bias-tagged CoNLL Data** |
|
- Depending on your task, you may fine-tune a model on the bias data using Hugging Face's `Trainer` class or native PyTorch/TensorFlow code. |
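   A minimal `Trainer` sketch, assuming `transformers` is installed and the data has already been tokenized and label-aligned; the checkpoint name, output directory, and hyperparameters below are placeholders, not recommendations:

   ```python
   from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                             DataCollatorForTokenClassification, Trainer,
                             TrainingArguments)

   def build_trainer(tokenized_datasets, checkpoint="bert-base-uncased"):
       """Assemble a Trainer for 3-label bias tagging (O, B-BIAS, I-BIAS)."""
       tokenizer = AutoTokenizer.from_pretrained(checkpoint)
       model = AutoModelForTokenClassification.from_pretrained(
           checkpoint, num_labels=3)
       args = TrainingArguments(
           output_dir="bias-tagger",          # placeholder path
           learning_rate=2e-5,
           per_device_train_batch_size=16,
           num_train_epochs=3,
       )
       return Trainer(
           model=model,
           args=args,
           train_dataset=tokenized_datasets["train"],
           eval_dataset=tokenized_datasets.get("validation"),
           data_collator=DataCollatorForTokenClassification(tokenizer),
       )

   # trainer = build_trainer(tokenized_datasets)
   # trainer.train()
   ```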
|
|
|
4. **Evaluation** |
|
- After training, evaluate the model's ability to recognize and possibly mitigate bias. |
|
   - This typically involves measuring precision, recall, and F1 at the span level for bias recognition.
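   Span-level scoring (as computed by libraries such as seqeval) compares predicted `(start, end, type)` spans against gold spans rather than individual tags. A minimal pure-Python sketch of the idea:

   ```python
   def extract_spans(tags):
       """Collect (start, end, type) spans from a BIO tag sequence."""
       spans, start, kind = [], None, None
       for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
           if tag.startswith("B-") or (start is not None and not tag.startswith("I-")):
               if start is not None:
                   spans.append((start, i, kind))
                   start, kind = None, None
           if tag.startswith("B-"):
               start, kind = i, tag[2:]
       return spans

   def span_f1(gold, pred):
       """Micro precision/recall/F1 over exact span matches."""
       g, p = set(extract_spans(gold)), set(extract_spans(pred))
       tp = len(g & p)
       precision = tp / len(p) if p else 0.0
       recall = tp / len(g) if g else 0.0
       f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
       return precision, recall, f1

   gold = ["O", "O", "B-BIAS", "I-BIAS", "I-BIAS", "I-BIAS", "O", "O", "O"]
   pred = ["O", "O", "B-BIAS", "I-BIAS", "I-BIAS", "I-BIAS", "O", "O", "O"]
   print(span_f1(gold, pred))  # perfect match -> (1.0, 1.0, 1.0)
   ```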
|
|
|
5. **Deployment** |
|
- Once satisfied with the model's performance, deploy it for real-world applications, always being mindful of its limitations and potential implications. |
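   For inference, a fine-tuned checkpoint can be wrapped in a `pipeline`; a hedged sketch, assuming `transformers` is installed and `"bias-tagger"` is the placeholder directory where your model was saved:

   ```python
   from transformers import pipeline

   def load_bias_tagger(model_dir="bias-tagger"):
       """Load a fine-tuned model as a token-classification pipeline."""
       # aggregation_strategy="simple" merges B-/I- pieces into whole spans
       return pipeline("token-classification", model=model_dir,
                       aggregation_strategy="simple")

   # tagger = load_bias_tagger()
   # print(tagger("The book written by egoist women is good."))
   ```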
|
|
|
|
|
|
|
Please cite us if you use this dataset.
|
|
|
**Citation**
|
``` |
|
@article{raza2024nbias, |
|
title={Nbias: A natural language processing framework for BIAS identification in text}, |
|
author={Raza, Shaina and Garg, Muskan and Reji, Deepak John and Bashir, Syed Raza and Ding, Chen}, |
|
journal={Expert Systems with Applications}, |
|
volume={237}, |
|
pages={121542}, |
|
year={2024}, |
|
publisher={Elsevier} |
|
} |
|
``` |