saheedniyi committed on
Commit
0b4f932
1 Parent(s): 1ff7b8a

Update README.md

Files changed (1)
  1. README.md +45 -24
README.md CHANGED
@@ -53,43 +53,64 @@ tags:
  size_categories:
  - 100K<n<1M
  ---
 
- 🇳🇬 Naijaweb
- Naijaweb is a dataset consisting of 270,000+ documents (230 million GPT-2 tokens), web-scraped from pages of interest to Nigerians.
- Data Collection
- The data was collected by extracting 1,795,908 unique posts from 19 sections on Nairaland.com, and about 1,289,195 outbound links were extracted from the posts.
- The web pages were then extracted using Trafilatura.
- Data cleaning
- The data was then cleaned using datatrove, the same library that was used to clean the recently released and high-performing FineWeb-Edu dataset.
- So it isn't far-fetched to say this dataset is of the same quality as the FineWeb dataset.
- Dataset cleaning procedure:
- #### Flowchart of how it was cleaned
- An example of a typical row of the dataset looks like
- ```
- ```
- ### Data fields
- ## How to load the dataset
- ## Social Impact of Dataset
- With the release of this dataset we aim to make model training more accessible to the machine learning community at large.
- While multiple open-weights models with strong performance have been publicly released in the past, more often than not these releases are not accompanied by the corresponding training dataset. This is unfortunate, as dataset specifics and characteristics have been shown to play a very large role in model performance. As the creation of a high-quality training dataset is a fundamental requirement for training an LLM capable of excelling at downstream tasks, with 🍷 FineWeb we (a) make the dataset creation process more transparent by sharing our entire processing setup, including the codebase used, and (b) help alleviate the costs of dataset curation, both in time and in compute, for model creators by publicly releasing the dataset to the community.
- ### Discussion of Biases
- Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing filtering at the URL level. However, a significant number of documents in the final dataset could still be considered toxic or contain harmful content. As 🍷 FineWeb was sourced from the web as a whole, any harmful biases typically present in it may be reproduced in our dataset.
- We deliberately avoided using machine-learning filtering methods that define text quality based on similarity to a "gold" source such as Wikipedia, or toxicity classifiers, as these methods are known to disproportionately remove content in specific dialects and to over-classify as toxic text related to specific social identities, respectively.
- ### Sections of the datasets
- ### Citation
+ # Naijaweb Dataset

+ **Naijaweb** is a dataset of over 270,000 documents, totaling approximately **230 million GPT-2 tokens**. The data was web-scraped from web pages popular among Nigerians, providing a rich resource for modeling Nigerian linguistic and cultural contexts.
+ ## Dataset Summary

+ | Features              | Data Types |
+ |------------------------|------------|
+ | Unnamed: 0             | int64      |
+ | text                   | string     |
+ | id                     | string     |
+ | link                   | string     |
+ | token_count            | int64      |
+ | section                | string     |
+ | domain                 | string     |
+ | score                  | float64    |
+ | int_score              | int64      |
+ | language               | string     |
+ | language_probability   | float64    |
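+ For reference, a minimal sketch of this schema expressed with the `datasets` library's typing objects (the Hub-generated schema is authoritative; this is just the table above rewritten in code):

+ ```python
+ from datasets import Features, Value

+ # Schema of a Naijaweb record, mirroring the summary table above.
+ naijaweb_features = Features({
+     "Unnamed: 0": Value("int64"),
+     "text": Value("string"),
+     "id": Value("string"),
+     "link": Value("string"),
+     "token_count": Value("int64"),
+     "section": Value("string"),
+     "domain": Value("string"),
+     "score": Value("float64"),
+     "int_score": Value("int64"),
+     "language": Value("string"),
+     "language_probability": Value("float64"),
+ })
+ ```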
 
+ ## Data Collection

+ The dataset was collected from **Nairaland.com** by extracting **1,795,908 unique posts** from 19 different sections of the site. Additionally, **1,289,195 outbound links** were extracted from these posts. The content of these web pages was then extracted using **Trafilatura**, a popular library for web content extraction.
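+ As an illustration of this step, here is a minimal sketch of how a single outbound link could be downloaded and reduced to clean text with Trafilatura. The URL is a placeholder and the default extraction settings are an assumption; the exact options used for Naijaweb are not documented here.

+ ```python
+ import trafilatura

+ # Placeholder for one of the outbound links collected from Nairaland posts.
+ url = "https://example.com/some-article"

+ # Download the page, then strip boilerplate (navigation, ads, comments)
+ # and keep the main textual content.
+ downloaded = trafilatura.fetch_url(url)
+ if downloaded is not None:
+     text = trafilatura.extract(downloaded)
+     print(text)
+ ```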
+ ## Data Cleaning

+ The cleaning process was carried out with **Datatrove**, the same library used to clean the **FineWeb-Edu** dataset, which is known for its high quality. Cleaning involved multiple stages of deduplication, filtering, and normalization to bring the dataset's quality in line with other high-performing web datasets.
 
+ ### Data Cleaning Procedure

+ - **Step 1:** Remove duplicate posts and links.
+ - **Step 2:** Filter out posts without a Nigerian context, based on domain analysis.
+ - **Step 3:** Normalize the text, removing HTML artifacts and irrelevant metadata.
+ - **Step 4:** Detect each document's language and record the predicted language probability.
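+ The exact Datatrove configuration for Naijaweb is not reproduced here, but the sketch below shows what a small local pipeline in this spirit can look like. The input/output folders, the particular filters, and running everything as a single local task are assumptions for illustration only.

+ ```python
+ from datatrove.executor import LocalPipelineExecutor
+ from datatrove.pipeline.readers import JsonlReader
+ from datatrove.pipeline.filters import GopherRepetitionFilter, GopherQualityFilter, LanguageFilter
+ from datatrove.pipeline.writers import JsonlWriter

+ # Illustrative pipeline: read raw extracted pages, drop repetitive or
+ # low-quality documents, run language identification, and write the
+ # surviving documents back out as JSONL.
+ executor = LocalPipelineExecutor(
+     pipeline=[
+         JsonlReader("raw_pages/"),       # assumed input folder of extracted pages
+         GopherRepetitionFilter(),        # remove highly repetitive documents
+         GopherQualityFilter(),           # heuristic document-quality filtering
+         LanguageFilter(),                # language identification; Naijaweb itself keeps several Nigerian languages
+         JsonlWriter("cleaned_pages/"),   # assumed output folder
+     ],
+     tasks=1,
+ )

+ executor.run()
+ ```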
+ ## Example Entry

+ Each data point contains the following fields:

+ - `Unnamed: 0`: an index column
+ - `text`: the main body of the post or web page
+ - `id`: a unique identifier for each document
+ - `link`: the original URL of the source content
+ - `token_count`: the number of tokens in the `text` field
+ - `section`: the Nairaland section where the post was found
+ - `domain`: the domain of the outbound link
+ - `score`: a float representing the content's relevance or quality
+ - `int_score`: an integer representation of `score`
+ - `language`: the detected language of the text (e.g., `en`, `yo`, `ha`, `ig`)
+ - `language_probability`: the confidence score of the language detection algorithm
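+ The 230-million-token figure in the summary suggests that `token_count` corresponds to GPT-2 tokenization; this is an assumption rather than something stated on the card. A quick way to reproduce such a count for a given `text` value:

+ ```python
+ from transformers import GPT2TokenizerFast

+ tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

+ # Hypothetical document text; in practice this would be a record's `text` field.
+ text = "Nairaland is one of the most visited websites in Nigeria."
+ token_count = len(tokenizer(text)["input_ids"])
+ print(token_count)
+ ```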
 
+ ## Data Splits

+ - **Training split:** 270,137 examples (620 MB)
+ ## How to Load the Dataset

+ To load the dataset with Hugging Face's `datasets` library:

+ ```python
+ from datasets import load_dataset

+ dataset = load_dataset("path_to_naijaweb_dataset", split="train")
+ ```
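+ Once loaded, records expose the fields documented above. A small usage sketch (field names come from the schema above; the confidence threshold is arbitrary):

+ ```python
+ # Inspect the first record.
+ example = dataset[0]
+ print(example["section"], example["domain"], example["language"])
+ print(example["text"][:300])

+ # Keep only documents detected as Yoruba with high confidence.
+ yoruba_docs = dataset.filter(
+     lambda ex: ex["language"] == "yo" and ex["language_probability"] > 0.9
+ )
+ print(len(yoruba_docs))
+ ```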