---
dataset_info:
  features:
    - name: 'Unnamed: 0'
      dtype: int64
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: link
      dtype: string
    - name: token_count
      dtype: int64
    - name: section
      dtype: string
    - name: domain
      dtype: string
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
    - name: language
      dtype: string
    - name: language_probability
      dtype: float64
  splits:
    - name: train
      num_bytes: 1106487193
      num_examples: 270137
  download_size: 653993961
  dataset_size: 1106487193
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - text-generation
language:
  - en
  - yo
  - ha
  - ig
tags:
  - finance
  - legal
  - music
  - art
  - medical
  - chemistry
  - biology
size_categories:
  - 100K<n<1M
---

# Naijaweb Dataset

Naijaweb is a dataset of 270,000+ documents, totaling approximately 230 million GPT-2 tokens, scraped from web pages popular among Nigerians. It provides a rich resource for modeling Nigerian linguistic and cultural contexts.
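The per-document token_count field is reported in GPT-2 tokens. The snippet below is a minimal sketch of how such counts can be computed with the Hugging Face GPT-2 tokenizer; the exact tokenization setup used to build the dataset is an assumption.

```python
from transformers import GPT2TokenizerFast

# Standard GPT-2 tokenizer (assumed; the card only states "GPT-2 tokens").
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def count_gpt2_tokens(text: str) -> int:
    # Count token ids without adding special tokens.
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])

print(count_gpt2_tokens("How una dey? Welcome to Nairaland."))
```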

## Dataset Summary

| Feature | Data Type |
| --- | --- |
| Unnamed: 0 | int64 |
| text | string |
| id | string |
| link | string |
| token_count | int64 |
| section | string |
| domain | string |
| score | float64 |
| int_score | int64 |
| language | string |
| language_probability | float64 |

## Data Collection

The dataset was collected from Nairaland.com: 1,795,908 unique posts were extracted from 19 sections of the site, along with 1,289,195 outbound links found in those posts. The content of the linked web pages was extracted with Trafilatura, a popular library for web scraping and content extraction.
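As a rough illustration of that extraction step, the sketch below uses Trafilatura's fetch_url and extract functions; the URL and options are illustrative, not the authors' exact pipeline.

```python
import trafilatura

# Download a page and extract its main textual content,
# dropping navigation, comments, and other boilerplate.
url = "https://www.nairaland.com/"  # illustrative URL only
downloaded = trafilatura.fetch_url(url)
if downloaded is not None:
    text = trafilatura.extract(downloaded)
    print(text[:500] if text else "No main content extracted.")
```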

## Data Cleaning

The cleaning process was conducted with Datatrove, the same library used to clean the FineWeb-Edu dataset, which is known for its high quality. Cleaning involved multiple stages of deduplication, filtering, and normalization to bring the dataset's quality in line with other high-performing web datasets.

Data Cleaning Procedure (a simplified sketch follows the list):

- Step 1: Remove duplicate posts and links.
- Step 2: Filter out posts without a Nigerian context, based on domain analysis.
- Step 3: Normalize the text, removing HTML artifacts and irrelevant metadata.
- Step 4: Detect each document's language and correct or filter entries based on the predicted language probability.
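A toy, pure-Python sketch of these four steps is shown below. The real pipeline uses Datatrove; the helper names, the langdetect dependency, the domain allow-list, and all thresholds here are illustrative assumptions.

```python
import hashlib
import re

from langdetect import detect_langs  # illustrative language detector

NIGERIAN_DOMAINS = {"nairaland.com"}  # illustrative allow-list for Step 2

def clean(documents):
    seen, cleaned = set(), []
    for doc in documents:
        # Step 1: drop exact duplicates (hash of text + link).
        key = hashlib.sha256((doc["text"] + doc["link"]).encode()).hexdigest()
        if key in seen:
            continue
        seen.add(key)

        # Step 2: keep only documents whose domain suggests a Nigerian context.
        if doc["domain"] not in NIGERIAN_DOMAINS:
            continue

        # Step 3: strip leftover HTML tags and collapse whitespace.
        text = re.sub(r"<[^>]+>", " ", doc["text"])
        text = re.sub(r"\s+", " ", text).strip()

        # Step 4: detect the language and record the prediction probability.
        guess = detect_langs(text)[0]
        doc.update(text=text, language=guess.lang, language_probability=guess.prob)
        cleaned.append(doc)
    return cleaned
```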

## Example Entry

Each data point contains the following fields (a hypothetical record is sketched after the list):

- Unnamed: 0: an index column
- text: the main body of the post or web page
- id: a unique identifier for each document
- link: the original URL of the source content
- token_count: the number of tokens in the text field
- section: the Nairaland section where the post was found
- domain: the domain of the outbound link
- score: a float representing the content's relevance or quality
- int_score: an integer representation of score
- language: the detected language of the text (e.g., en, yo, ha, ig)
- language_probability: the confidence score of the language detection algorithm
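A hypothetical record with placeholder values (not an actual row from the dataset), showing the schema:

```python
example = {  # placeholder values only, not a real row
    "Unnamed: 0": 0,
    "text": "How body? Make we yarn about the new fuel price...",
    "id": "post-0000001",
    "link": "https://www.nairaland.com/example-thread",
    "token_count": 87,
    "section": "Politics",
    "domain": "nairaland.com",
    "score": 2.75,
    "int_score": 3,
    "language": "en",
    "language_probability": 0.93,
}
```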

## Data Splits

- Training split: 270,137 examples (~654 MB download, ~1.1 GB on disk)

## How to Load the Dataset

To load the dataset with the Hugging Face datasets library:

```python
from datasets import load_dataset

# Replace "path_to_naijaweb_dataset" with the dataset's repository ID on the Hugging Face Hub.
dataset = load_dataset("path_to_naijaweb_dataset", split="train")
```
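Once loaded, the columns in the schema above can be used to select subsets, for example Yoruba-language documents (a sketch using the standard datasets API):

```python
# Inspect one record.
print(dataset[0]["text"][:200])

# Keep only documents whose detected language is Yoruba.
yoruba_subset = dataset.filter(lambda example: example["language"] == "yo")
print(len(yoruba_subset))
```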