---
dataset_info:
  features:
  - name: 'Unnamed: 0'
    dtype: int64
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: link
    dtype: string
  - name: token_count
    dtype: int64
  - name: section
    dtype: string
  - name: domain
    dtype: string
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  - name: language
    dtype: string
  - name: language_probability
    dtype: float64
  splits:
  - name: train
    num_bytes: 1106487193
    num_examples: 270137
  download_size: 653993961
  dataset_size: 1106487193
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- text-generation
language:
- en
- yo
- ha
- ig
tags:
- finance
- legal
- music
- art
- medical
- chemistry
- biology
size_categories:
- 100K<n<1M
---
# Naijaweb Dataset

**Naijaweb** is a dataset of over 270,000 documents, totaling approximately **230 million GPT-2 tokens**, scraped from web pages popular among Nigerians. It provides a rich resource for modeling Nigerian linguistic and cultural contexts.
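
The token counts are reported in GPT-2 tokens; the card does not state exactly how they were computed, so the snippet below is only a rough way to recount a sample using `tiktoken`'s `gpt2` encoding (an assumption, not necessarily the tool used to produce the `token_count` field):

```python
# Hedged sketch: recount GPT-2 tokens for a sample of documents.
# The tokenizer setup is assumed; the published token_count values
# may have been produced differently.
import tiktoken
from datasets import load_dataset

enc = tiktoken.get_encoding("gpt2")
ds = load_dataset("saheedniyi/naijaweb", split="train", streaming=True)

total = 0
for i, row in enumerate(ds):
    # disallowed_special=() avoids errors if a document contains
    # special-token-like strings such as "<|endoftext|>"
    total += len(enc.encode(row["text"], disallowed_special=()))
    if i >= 999:  # only sample the first 1,000 documents
        break

print(f"GPT-2 tokens in the first 1,000 documents: {total:,}")
```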

## Dataset Summary

| Features       | Data Types  |
|----------------|-------------|
| Unnamed: 0     | int64       |
| text           | string      |
| id             | string      |
| link           | string      |
| token_count    | int64       |
| section        | string      |
| domain         | string      |
| score          | float64     |
| int_score      | int64       |
| language       | string      |
| language_probability | float64 |

## Data Collection

The dataset was collected from **Nairaland.com**: **1,795,908 unique posts** were gathered from 19 sections of the site, together with **1,289,195 outbound links** found in those posts. The content of the linked pages was then extracted with **Trafilatura**, a popular library for web content extraction.
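
The exact extraction settings are not published on this card; the snippet below is a minimal sketch of how a single outbound link could be fetched and converted to plain text with Trafilatura (the URL is a placeholder):

```python
# Minimal Trafilatura sketch: fetch one page and extract its main text.
# The URL is a placeholder, not one of the actual outbound links.
import trafilatura

downloaded = trafilatura.fetch_url("https://example.com/some-article")
if downloaded is not None:
    text = trafilatura.extract(downloaded)  # returns None if nothing extractable
    print(text)
```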

## Data Cleaning

The data was cleaned with **[Datatrove](https://github.com/huggingface/datatrove)**, the same library used to clean the **[FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)** dataset, which is known for its high quality. Cleaning involved multiple stages of deduplication, filtering, and normalization so that the dataset's quality is comparable to other high-quality web datasets; a minimal pipeline sketch follows the list below.

### Data Cleaning Procedure
- **URL filtering**
- **Repetition and quality filtering**
- **Personally Identifiable Information (PII) removal**
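
The Datatrove configuration used for Naijaweb is not included on this card; the sketch below assumes the standard FineWeb-style building blocks (URL filtering, Gopher-style repetition and quality filters, PII formatting) applied to hypothetical local JSONL dumps, and omits the multi-stage deduplication steps:

```python
# Hedged sketch of a FineWeb-style Datatrove cleaning pipeline.
# Paths, task count, and the exact filter set are assumptions, not the
# published Naijaweb configuration; dedup stages are omitted for brevity.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.filters import (
    GopherQualityFilter,
    GopherRepetitionFilter,
    URLFilter,
)
from datatrove.pipeline.formatters import PIIFormatter
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.writers import JsonlWriter

pipeline = [
    JsonlReader("data/raw/"),      # hypothetical dump of scraped pages
    URLFilter(),                   # drop blocklisted / unwanted URLs
    GopherRepetitionFilter(),      # remove highly repetitive documents
    GopherQualityFilter(),         # heuristic quality filtering
    PIIFormatter(),                # mask emails and IP addresses
    JsonlWriter("data/cleaned/"),  # hypothetical output location
]

LocalPipelineExecutor(pipeline=pipeline, tasks=4).run()
```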

## Data Fields

Each data point contains the following fields:
- `Unnamed: 0`: an index column
- `text`: the main body of the post or web page
- `id`: unique identifier for each document
- `link`: the original URL of the source content
- `token_count`: the number of tokens in the `text` field
- `section`: the Nairaland section where the post was found
- `domain`: the domain of the outbound link
- `score`: a float representing the content's relevance or quality
- `int_score`: an integer representation of `score`
- `language`: detected language of the text (e.g., `en`, `yo`, `ha`, `ig`)
- `language_probability`: the confidence score of the language-detection step (see the sketch after this list)
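
The card does not name the language-identification tool behind `language` and `language_probability`; FineWeb-style pipelines commonly use fastText's `lid.176.bin` model, so the sketch below assumes that model purely for illustration:

```python
# Hedged sketch: reproduce a language / language_probability pair with
# fastText's lid.176 model (an assumption; the card does not name the tool).
# Model download: https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
import fasttext

model = fasttext.load_model("lid.176.bin")

text = "How far, wetin dey happen for Naija today?"
# fastText's predict() rejects newlines, so collapse them first
labels, probs = model.predict(text.replace("\n", " "), k=1)

language = labels[0].replace("__label__", "")  # e.g. "en"
language_probability = float(probs[0])
print(language, language_probability)
```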

## Data Splits

- **Training split:** 270,137 examples (approximately 620 MB)

## How to Load the Dataset

To load the dataset using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("saheedniyi/naijaweb")
```
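
For quick inspection without downloading the full split, the dataset can also be streamed and individual fields read per record:

```python
from datasets import load_dataset

# Stream the train split instead of downloading it in full.
stream = load_dataset("saheedniyi/naijaweb", split="train", streaming=True)

first = next(iter(stream))
print(first["section"], first["domain"], first["language"])
print(first["text"][:200])  # first 200 characters of the document text
```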

## Social Impact

Naijaweb was created to make Nigerian web data more accessible, providing researchers and developers with a dataset rich in Nigerian contexts across various domains such as **Politics**, **Education**, **Business**, and **Health**.

## Bias and Ethical Considerations

Since the data is collected from publicly available web pages, inherent biases present in the sources may be reflected in the dataset. These biases can manifest in areas such as **language**, **ideology**, or **topic representation**. Users should be mindful of these potential biases when developing models, especially for sensitive areas like **legal** or **medical** information.

## Sections of the Dataset

The dataset comprises content from 19 different sections of **Nairaland.com**, covering topics such as **Politics**, **Education**, **Business**, and **Health**.