license: odc-by
task_categories:
  - text-generation
language:
  - en
  - zh
tags:
  - pretrain
  - multi-modal
size_categories:
  - 10B<n<100B

InfiMM-WebMath-40B Dataset

ArXiv | PDF

InfiMM-WebMath-40B is a large-scale, open-source multimodal dataset specifically designed for mathematical reasoning tasks. It incorporates both text and images, extracted from web documents, to advance the pre-training of Multimodal Large Language Models (MLLMs). The dataset is tailored to support sophisticated reasoning tasks that involve understanding both text and visual elements like diagrams, figures, and geometric plots.

Dataset Overview

The InfiMM-WebMath-40B dataset includes:

  • 24 million web documents.
  • 85 million image URLs.
  • 40 billion text tokens.

These documents were sourced from Common Crawl data snapshots (2019–2023), filtered to focus on high-quality mathematical and scientific content in both English and Chinese.

Data Structure

The dataset is organized in a format that captures both text and images in their original order, ensuring accurate interleaving between the two modalities. The structure is as follows:

{
  "URL": "...",                # URL of the source document.
  "text_list": [...],          # Extracted text segments; an entry is None where the element is an image.
  "image_list": [...],         # Image URLs; an entry is None where the element is a text segment.
  "metadata": {                # Information about the extraction process.
    "ft_lang_label",           # Language detected by fastText.
    "ft_lang_prob",            # Confidence of the fastText language prediction.
    "math_prob",               # Score from the first-round, high-recall fastText math classifier.
    "size",                    # Size of the document.
    "snap",                    # Timestamp of the Common Crawl snapshot.
    "text_gpt3_token_len",     # Length of the text in GPT-3 tokenizer tokens.
    "char_repetition_ratio",   # Character-level repetition ratio.
    "word_repetition_ratio",   # Word-level repetition ratio.
    "special_character_ratio", # Ratio of special characters.
    "punctuation_ratio",       # Ratio of punctuation characters.
    "nsfw_num_words",          # Number of NSFW words.
    "has_unicode_error",       # Whether any Unicode errors were encountered.
    "math_prob_llama3"         # Score from the second-round, high-precision fastText math classifier.
  }
}
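As a quick illustration of this layout, the sketch below checks that a record is well formed: the two lists are parallel, and exactly one of the two entries is non-None at each position. The sample record here is hypothetical, not taken from the dataset.

```python
# A hypothetical record following the structure described above.
record = {
    "URL": "https://example.com/math-page",
    "text_list": ["Consider the triangle below.", None],
    "image_list": [None, "https://example.com/fig1.png"],
}

def is_well_formed(rec):
    """Check that text_list and image_list are parallel and mutually exclusive."""
    texts, images = rec["text_list"], rec["image_list"]
    if len(texts) != len(images):
        return False
    # Each position holds either a text segment or an image URL, never both.
    return all((t is None) != (i is None) for t, i in zip(texts, images))
```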

Interleaved Text and Images

The text_list and image_list are designed as parallel arrays, maintaining the sequence of the document. This interleaving structure allows models to reconstruct the flow of the original document:

  • If text_list[i] contains text, then image_list[i] is None, indicating that the content at this position is text.
  • If text_list[i] is None, then image_list[i] contains a URL to an image at that position in the document.

This interleaving of text and images ensures that models trained on this dataset can process the content in the same way a human would, following the logical flow between text explanations and accompanying visual aids.
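The reconstruction rule above can be sketched in a few lines. The record below is a made-up example; real records come from the dataset itself.

```python
# A hypothetical record with interleaved text and images.
doc = {
    "text_list": ["Pythagoras: ", None, "so a^2 + b^2 = c^2."],
    "image_list": [None, "https://example.com/triangle.png", None],
}

def interleave(record):
    """Return (kind, content) pairs in the original document order."""
    parts = []
    for text, image in zip(record["text_list"], record["image_list"]):
        if text is not None:
            parts.append(("text", text))
        else:
            parts.append(("image", image))
    return parts

parts = interleave(doc)
```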

Data Collection and Filtering Pipeline

The InfiMM-WebMath-40B dataset was created through a comprehensive multi-stage filtering and extraction process, starting from over 120 billion web pages in the Common Crawl repository. The key steps in this pipeline are outlined below:

  1. Language Filtering: The first step filtered for English and Chinese content. We used Trafilatura to extract text from web pages and LangDetect to efficiently identify the language, ensuring that only content in the target languages was retained.
  2. High-Recall Math Filtering: To capture as much math-related content as possible, we employed a modified version of Resiliparse for HTML parsing. In conjunction with a fastText model optimized for high recall, this phase ensured that any potentially mathematical content was preserved.
  3. Deduplication: MinHash was used for fuzzy text deduplication, and exact URL matching was applied between neighboring Common Crawl snapshots.
  4. Rule-Based Filtering: This step applied specific filtering rules to remove irrelevant or low-quality content, such as documents containing NSFW material or boilerplate “lorem ipsum,” enhancing the dataset’s overall quality.
  5. High-Precision Math Filtering: A second pass was performed using a fastText model, this time tuned for high precision, to ensure that only highly relevant mathematical content remained in the dataset. This refinement step further improved the dataset's focus and relevance for mathematical reasoning tasks.
  6. Image Filtering: Finally, rule-based filtering was applied to images, removing irrelevant or extraneous visuals (e.g., logos, banners) to ensure that the remaining images were aligned with the mathematical content.
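The URL-matching half of step 3 can be sketched as follows: pages whose URL already appeared in the previous snapshot are dropped. The URLs and records below are illustrative; the fuzzy MinHash deduplication of the text itself is not shown.

```python
# URLs already seen in the previous Common Crawl snapshot (hypothetical).
prev_snapshot_urls = {
    "https://example.com/algebra",
    "https://example.com/geometry",
}

# Pages extracted from the current snapshot (hypothetical).
current_snapshot = [
    {"URL": "https://example.com/algebra", "text_list": ["..."]},
    {"URL": "https://example.com/calculus", "text_list": ["..."]},
]

# Keep only pages whose URL did not appear in the neighboring snapshot.
deduplicated = [doc for doc in current_snapshot
                if doc["URL"] not in prev_snapshot_urls]
```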

How to Use the Dataset

  1. Base Text Download: The dataset is available for download as a set of web documents with interleaved text and image URLs.
  2. Image Download: Users need to download images according to the image URLs provided.
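Step 2 above might look like the sketch below, which fetches every non-None URL in a record's image_list and derives a stable local filename from each URL. The helper names and the hashing scheme are our own choices, not part of the dataset's tooling.

```python
import hashlib
import os
import urllib.request

def url_to_filename(url):
    """Map an image URL to a stable local filename (a short hash keeps it unique)."""
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()[:16]
    ext = os.path.splitext(url)[1] or ".bin"
    return digest + ext

def download_images(image_list, out_dir):
    """Download every non-None URL in a record's image_list into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    for url in image_list:
        if url is None:
            continue
        path = os.path.join(out_dir, url_to_filename(url))
        if not os.path.exists(path):  # skip images already downloaded
            urllib.request.urlretrieve(url, path)
```

In practice, a production downloader would also want retries, timeouts, and concurrency, which are omitted here for brevity.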

Note

If you need more precisely filtered data, you can apply stricter (higher) thresholds to the math_prob and math_prob_llama3 fields in the metadata.
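Such a threshold filter can be sketched as below. The records and the 0.8 cutoffs are illustrative; choose thresholds to suit your precision/recall trade-off.

```python
# Hypothetical records carrying the two math scores from the metadata.
records = [
    {"metadata": {"math_prob": 0.95, "math_prob_llama3": 0.90}},
    {"metadata": {"math_prob": 0.60, "math_prob_llama3": 0.40}},
]

def keep(record, min_recall_score=0.8, min_precision_score=0.8):
    """Keep only documents that pass both math classifiers at the chosen thresholds."""
    m = record["metadata"]
    return (m["math_prob"] >= min_recall_score
            and m["math_prob_llama3"] >= min_precision_score)

filtered = [r for r in records if keep(r)]
```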

License

InfiMM-WebMath-40B is made available under an ODC-By 1.0 license; users should also abide by the CommonCrawl ToU: https://commoncrawl.org/terms-of-use/. We do not alter the license of any of the underlying data.

Citation

@misc{han2024infimmwebmath40badvancingmultimodalpretraining,
      title={InfiMM-WebMath-40B: Advancing Multimodal Pre-Training for Enhanced Mathematical Reasoning}, 
      author={Xiaotian Han and Yiren Jian and Xuefeng Hu and Haogeng Liu and Yiqi Wang and Qihang Fan and Yuang Ai and Huaibo Huang and Ran He and Zhenheng Yang and Quanzeng You},
      year={2024},
      eprint={2409.12568},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2409.12568}, 
}