---
license: apache-2.0
---
# Dataset Card for Low-Resource-Language-Dataset
This dataset is designed to aid Natural Language Processing (NLP) research on low-resource languages, particularly Urdu. It includes structured datasets and preprocessing tools curated from the BBC Urdu website.
## Dataset Details
### Dataset Description
This dataset contains articles, summaries, and topics scraped from [BBC Urdu](https://www.bbc.com/urdu). It is structured into training and testing datasets for machine learning applications. The data is preprocessed and tokenized to support text summarization and other NLP tasks.
- **Curated by:** Subayyal Sheikh and contributors.
- **Language(s):** Urdu.
- **License:** Apache 2.0 License.
### Dataset Sources
- **Repository:** [GitHub Repository](https://github.com/subayyal802/Low-Resource-Language-Dataset)
- **Paper:** *Breaking Language Barriers: Dataset Development for Low Resource Language*
- **Demo:** None
## Uses
### Direct Use
The dataset is ideal for:
- Text summarization
- Language modeling
- Topic classification
### Out-of-Scope Use
This dataset may not be suitable for tasks outside of Urdu language processing. Users are advised against malicious or inappropriate use of the data.
## Dataset Structure
The dataset is structured as follows:
- **JSONL Files**:
  - `Articles`, `Topics`, and `Summaries` (raw and processed).
  - Training and test splits (`Articles-train`, `Articles-test`, etc.).
  - Preprocessed articles (`BBCArticle512`, `BBCArticle512J`, etc.).
- **Text Files**:
  - Token lengths (`Length-Article-512`, `Length-Summary-512`, etc.).
  - Logs and ratio calculations (`Log`, `Ratios-512`, `TestRatio-512`).
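The JSONL files above can be read with a few lines of standard-library Python. This is a minimal sketch: the field name in the sample below is illustrative, since the exact schema is defined by the repository's scripts.

```python
import json

def load_jsonl(path):
    """Read one JSON object per line into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Demonstrate the round trip on a tiny sample; the real files
# (Articles, Summaries, and their -train/-test splits) use the same
# one-object-per-line layout. The "article" key is illustrative.
sample = [{"article": "first article"}, {"article": "second article"}]
with open("sample.jsonl", "w", encoding="utf-8") as f:
    for row in sample:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

rows = load_jsonl("sample.jsonl")
```

`ensure_ascii=False` keeps Urdu text stored as readable UTF-8 rather than `\uXXXX` escapes.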
## Dataset Creation
### Curation Rationale
The dataset was created to address the lack of resources for NLP research in the Urdu language. It facilitates diverse NLP applications.
### Source Data
#### Data Collection and Processing
Data was collected using scraping scripts and processed to normalize and tokenize text into usable formats.
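The exact pipeline lives in the repository's scripts. As a rough sketch only, and assuming the `512` suffix in the file names denotes a 512-token budget, a normalization and truncation step might look like:

```python
import re

def normalize(text, max_tokens=512):
    """Collapse whitespace and truncate to a fixed token budget.

    A simplified stand-in for the repository's preprocessing: the
    actual scripts may use a subword tokenizer rather than the
    whitespace split shown here.
    """
    text = re.sub(r"\s+", " ", text).strip()
    tokens = text.split(" ")
    return " ".join(tokens[:max_tokens])

cleaned = normalize("ایک   مثال\nکا  جملہ")  # whitespace collapsed
truncated = normalize("لفظ " * 600)          # clipped to 512 tokens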
#### Who are the source data producers?
The source data is curated from publicly available articles on [BBC Urdu](https://www.bbc.com/urdu).
### Annotations
The dataset does not include human annotations; all content is machine-extracted and processed.
#### Personal and Sensitive Information
The dataset contains publicly available data and does not include personal or sensitive information.
## Bias, Risks, and Limitations
- The dataset may contain biases inherent in its source, such as topic selection by the BBC Urdu editorial team.
- Machine-extracted summaries and topic labels may not align perfectly with every downstream use case.
### Recommendations
Users should consider dataset limitations and apply domain knowledge when using the data for specific tasks.
## Citation
If you use this dataset in your research, please cite:
```bibtex
@article{Subayyal2024,
  title={Breaking Language Barriers: Dataset Development for Low Resource Language},
  author={Sheikh, Subayyal and Jan, Yasir and Javaid, Masab A. and Khalil, Ammad and Khan, Jebran},
  journal={PeerJ Computer Science},
  year={2024}
}
```