
UniFilter Synthetic Training Data (unifilter_train_data)

This repository contains UniFilter-Post-Train-Data, the synthetic training data used for the UniFilter model, as presented in the paper Train a Unified Multimodal Data Quality Classifier with Synthetic Data.

UniFilter is an efficient Multimodal Large Language Model (MLLM) designed as a Unified Multimodal Data Quality Classifier. It filters high-quality image-text caption and interleaved document data by generating quality scores. This dataset facilitates the training of UniFilter, enabling MLLMs pre-trained on the filtered data to demonstrate significantly enhanced capabilities in zero-shot reasoning and in-context learning.

Abstract

Multimodal Large Language Models (MLLMs) are continually pre-trained on a mixture of image-text caption data and interleaved document data, while high-quality data filtering for image-text interleaved document data remains under-explored. We propose to train an efficient MLLM as a Unified Multimodal Data Quality Classifier to Filter both high-quality image-text caption and interleaved data (UniFilter). To address the challenge of collecting diverse labeled multimodal data, we introduce a semi-synthetic approach that leverages readily available raw images and generates corresponding text across four quality levels. This method enables efficient creation of sample-score pairs for both caption and interleaved document data to train UniFilter. We apply UniFilter to curate high-quality caption data from the DataComp caption dataset and interleaved data from the OBELICS image-text interleaved dataset. MLLMs pre-trained on the filtered data demonstrate significantly enhanced capabilities compared to those trained on baseline-filtered data, achieving stronger zero-shot reasoning and in-context learning capabilities. After visual supervised fine-tuning, these UniFilter-induced MLLMs achieve stronger performance on various benchmarks, highlighting the downstream benefits of high-quality multimodal pre-training. We release the synthetic training data used for training UniFilter, the UniFilter model checkpoints, and the high-quality interleaved document subset OBELICS-HQ, curated by UniFilter, to the community for reproduction and further development.

Dataset Description

This dataset (UniFilter-Post-Train-Data) consists of a large-scale set of (multimodal data example, quality score) pairs, covering both caption data and interleaved document data. This synthetic multimodal example-score paired data is used to train the UniFilter model, a Unified Multimodal Data Quality Classifier that generates quality scores for both image-text caption and interleaved document data. These quality scores can then be used to filter high-quality data, significantly strengthening the capabilities of pre-trained MLLMs.
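As a sketch of how such example-score pairs are consumed downstream, the snippet below keeps only examples whose quality score clears a threshold. The field names (`text`, `score`) and the four-level score scale are illustrative assumptions based on the paper's description, not this dataset's actual schema.

```python
# Minimal sketch of score-based filtering (assumed schema: each record
# carries a "score" field on a four-level quality scale, per the paper;
# the exact field names in this dataset may differ).
def filter_by_score(records, threshold):
    """Keep records whose quality score meets the threshold."""
    return [r for r in records if r["score"] >= threshold]

records = [
    {"text": "a noisy caption", "score": 0},
    {"text": "a decent caption", "score": 2},
    {"text": "a high-quality caption", "score": 3},
]
high_quality = filter_by_score(records, threshold=2)
print(len(high_quality))  # 2
```

In the actual pipeline, the scores come from the trained UniFilter model rather than being stored in the pre-training corpus; the thresholding logic is the same.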

Sample Usage: UniFilter Training

This dataset is used for training the UniFilter classifier. The following sections are excerpted from the UniFilter GitHub repository, detailing the installation and training process that consumes this dataset.

Installation

If you only require quality score generation, install just the customized LLaVA package:

conda create -n unifilter python=3.10
conda activate unifilter
pip install -e LLaVA
pip install flash-attn==2.5.2 --no-build-isolation

Data Preparation for UniFilter Training

UniFilter is trained on a large-scale set of (multimodal data example, quality score) pairs, containing both caption data and interleaved document data. The synthetic multimodal example-score paired data are available at UniFilter-Post-Train-Data (this dataset).

UniFilter Training

We develop the UniFilter training and scoring codebase on top of the LLaVA-Unified repo, which adapts LLaVA to support recent LLMs and vision encoders. The UniFilter architecture comprises three modules: the vision encoder, the visual projector, and the LLM backbone. Unlike a standard MLLM, the LLM backbone has no language modeling head; we replace it with a score generation head. These modules are specified with:

  • --mm_projector_type: visual projector, e.g. aapool_mlp, an average-pooling projector producing 144 tokens per image
  • --vision_tower: vision encoder, e.g. SigLIP-SO-400M at 384px resolution
  • --model_name_or_path: LLM backbone, e.g. Qwen2.5-0.5B-Instruct
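Put together, a launch command might pass these module flags roughly as follows. This is a hypothetical sketch: the script path, DeepSpeed config, and output directory are placeholders, and the authoritative argument list lives in the repo's training scripts.

```shell
# Hypothetical sketch of how the three module flags fit together;
# see the UniFilter repo's training scripts for the real arguments.
deepspeed llava/train/train.py \
    --deepspeed scripts/zero3.json \
    --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
    --vision_tower google/siglip-so400m-patch14-384 \
    --mm_projector_type aapool_mlp \
    --output_dir checkpoints/unifilter-qwen2.5-0.5b
```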

Visual Projector Pre-Training (Stage 1)

Please download the 558K subset of the LLaVA-Pretrain caption dataset.

Training script with DeepSpeed ZeRO-2: pretrain.sh.

UniFilter Classifier Training (Stage 2)

Training script with DeepSpeed ZeRO-3: train_classifier.sh.

Our training script uploads metrics to wandb. The best UniFilter checkpoint is saved based on quality classification accuracy on the validation sets.

Citation

Please cite our paper if you find this repository interesting or helpful:

@article{UniFilter,
  title={Train a Unified Multimodal Data Quality Classifier with Synthetic Data},
  author={Wang, Weizhi and Lin, Rongmei and Li, Shiyang and Lockard, Colin and Sarkhel, Ritesh and Lokegaonkar, Sanket and Shang, Jingbo and Yan, Xifeng and Zalmout, Nasser and Li, Xian},
  journal={arXiv preprint arXiv:2510.15162},
  year={2025}
}

Acknowledgement

  • LLaVA: the codebase we built upon for UniFilter training.