saheedniyi committed · commit 0b4f932 · parent 1ff7b8a
Update README.md

README.md CHANGED
@@ -53,43 +53,64 @@ tags:
 size_categories:
 - 100K<n<1M
 ---
- Data
- Data
- The data was then cleaned using datatrove, same library that was used to clean the recently released and high performing fineweb edu dataset.
- So itv isn't far fetched to say this dataset is of the same quality as the fineweb dataset
- Dataset cleaning procedure:
- ```
- ```
- ##
- With the release of this dataset we aim to make model training more accessible to the machine learning community at large.
- Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing filtering on the URL level. However, there are still a significant number of documents present in the final dataset that could be considered toxic or contain harmful content. As 🍷 FineWeb was sourced from the web as a whole, any harmful biases typically present in it may be reproduced on our dataset.
- ### Citation
# Naijaweb Dataset

**Naijaweb** is a dataset of over 270,000 documents, totaling approximately **230 million GPT-2 tokens**, scraped from web pages popular among Nigerians. It provides a rich resource for modeling Nigerian linguistic and cultural contexts.
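For context, a per-document GPT-2 token count like the one above can be computed along these lines; the `transformers` tokenizer is an assumed choice here, not necessarily the tooling used to produce the reported figure:

```python
# Approximate a GPT-2 token count for one document.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
token_count = len(tokenizer("How una dey see this market today?")["input_ids"])
print(token_count)
```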

## Dataset Summary

| Features             | Data Types |
|----------------------|------------|
| Unnamed: 0           | int64      |
| text                 | string     |
| id                   | string     |
| link                 | string     |
| token_count          | int64      |
| section              | string     |
| domain               | string     |
| score                | float64    |
| int_score            | int64      |
| language             | string     |
| language_probability | float64    |
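For reference, the same schema can be written down with the `datasets` library's `Features` type; this is an illustrative sketch, not a definition taken from the dataset card itself:

```python
# The summary table above, expressed as a datasets schema.
from datasets import Features, Value

features = Features({
    "Unnamed: 0": Value("int64"),
    "text": Value("string"),
    "id": Value("string"),
    "link": Value("string"),
    "token_count": Value("int64"),
    "section": Value("string"),
    "domain": Value("string"),
    "score": Value("float64"),
    "int_score": Value("int64"),
    "language": Value("string"),
    "language_probability": Value("float64"),
})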

## Data Collection

The dataset was collected from **Nairaland.com**, yielding **1,795,908 unique posts** from 19 sections of the site, along with **1,289,195 outbound links** found in those posts. The content of the linked web pages was extracted using **Trafilatura**, a popular library for web content extraction.
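For illustration, this is roughly how a single outbound link can be fetched and its main content extracted with Trafilatura (the URL below is a placeholder):

```python
# Fetch one page and extract its main text, boilerplate removed.
import trafilatura

downloaded = trafilatura.fetch_url("https://example.com/some-article")
text = trafilatura.extract(downloaded)
print(text)
```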

## Data Cleaning

The data was cleaned with **Datatrove**, the same library used to clean the **FineWeb-Edu** dataset, which is known for its high quality. Cleaning involved multiple stages of deduplication, filtering, and normalization, with the aim of matching the quality of other high-performing web datasets; an illustrative pipeline sketch follows the steps below.

### Data Cleaning Procedure

- **Step 1:** Remove duplicate posts and links.
- **Step 2:** Filter out posts without a Nigerian context, based on domain analysis.
- **Step 3:** Normalize the text, removing HTML artifacts and irrelevant metadata.
- **Step 4:** Detect each document's language and record the predicted language probability.
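As a rough illustration of steps 2–4, the sketch below wires a few real Datatrove components into a local pipeline. The input/output paths, parameters, and choice of stages are assumptions for the sketch, not the actual Naijaweb pipeline:

```python
# A minimal, illustrative datatrove pipeline: read JSONL documents, apply
# quality and language filtering, and write the survivors back out.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.filters import GopherQualityFilter, LanguageFilter
from datatrove.pipeline.writers.jsonl import JsonlWriter

executor = LocalPipelineExecutor(
    pipeline=[
        JsonlReader("raw_documents/"),      # hypothetical input directory
        GopherQualityFilter(),              # heuristic quality filtering
        LanguageFilter(languages=["en"]),   # fastText-based language detection
        JsonlWriter("cleaned_documents/"),  # hypothetical output directory
    ],
    tasks=4,  # number of parallel tasks
)
executor.run()
```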

## Example Entry

Each data point contains the following fields:
- `Unnamed: 0`: an index column
- `text`: the main body of the post or web page
- `id`: a unique identifier for each document
- `link`: the original URL of the source content
- `token_count`: the number of tokens in the `text` field
- `section`: the Nairaland section where the post was found
- `domain`: the domain of the outbound link
- `score`: a float representing the content's relevance or quality
- `int_score`: an integer representation of `score`
- `language`: the detected language of the text (e.g., `en`, `yo`, `ha`, `ig`)
- `language_probability`: the confidence score of the language detection algorithm
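To make the layout concrete, here is a purely hypothetical record; every value below is invented for illustration and does not come from the dataset:

```python
# A made-up record showing the field layout; all values are placeholders.
example = {
    "Unnamed: 0": 0,
    "text": "Example post body...",
    "id": "doc-000001",
    "link": "https://example.com/post",
    "token_count": 128,
    "section": "Politics",
    "domain": "example.com",
    "score": 2.75,
    "int_score": 3,
    "language": "en",
    "language_probability": 0.98,
}
```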

## Data Splits

- **Training split:** 270,137 examples (620 MB)

## How to Load the Dataset

To load the dataset using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

# "path_to_naijaweb_dataset" is a placeholder; replace it with the dataset's
# repository id on the Hugging Face Hub.
dataset = load_dataset("path_to_naijaweb_dataset", split="train")
```
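Once loaded, a few quick checks confirm the split and schema match the tables above:

```python
# Quick sanity checks after loading (continues from the snippet above).
print(dataset)                   # number of rows and column names
print(dataset.features)          # schema with the dtypes listed earlier
print(dataset[0]["text"][:200])  # preview the first document
```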