
# MMBERT Decay Phase Data
Phase 3 of 3: Annealed language learning decay phase (100B tokens) with massive multilingual expansion to 1833 languages.
## Data Composition
NOTE: there are multiple decay data mixtures. The mixture described below is the Decay-Cont mixture; however, the data in this repository is the Decay-Eng mixture. If you are interested in the others, please let me know so I can prioritize them.
| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| FineWeb2 | 78.5 | 76.0% | High-quality multilingual web crawl data |
| Wikipedia (MegaWika) | 9.5 | 9.2% | Encyclopedia articles (1833 languages) |
| arXiv | 3.3 | 3.2% | Academic preprints |
| Textbooks (ProLong) | 3.1 | 3.0% | Educational content |
| Code (ProLong) | 2.8 | 2.7% | Code repositories and files |
| Books | 2.2 | 2.1% | Literature and reference books |
| DCLM (Dolmino) | 2.0 | 2.0% | High-quality English web data |
| Tulu Flan | 1.0 | 1.0% | Instruction-following data |
| Starcoder | 0.5 | 0.5% | Code repositories |
| Dolmino Math | 0.5 | 0.5% | Mathematical content |
| **Total** | **103.3** | **100.0%** | Optimized for rapid language acquisition |
## Massive Language Coverage
This phase dramatically expands language coverage to 1833 languages, implementing the novel Cascading Annealed Language Learning (ALL) approach:
- Temperature Schedule: τ=0.3 (most uniform sampling)
- Low-resource Focus: Includes 1723 new languages with minimal data
- Rapid Learning: Demonstrates a 68% performance improvement on Tigrinya and 26% on Faroese
- Script Diversity: Covers virtually all writing systems in FineWeb2
### Key Innovation: Annealed Language Learning
Rather than training on all languages simultaneously, MMBERT uses a cascading approach:
- Phase 1: 60 high-resource languages (τ=0.7)
- Phase 2: 110 languages including mid-resource (τ=0.5)
- Phase 3: 1833 languages with focus on low-resource (τ=0.3)
This enables rapid learning of new languages while maintaining performance on high-resource ones.
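To make the temperature schedule concrete, below is a minimal, illustrative sketch of temperature-based language sampling. It is not the actual mmBERT data pipeline, and the token counts and language codes are made up; it only shows how sampling weights proportional to n_i^τ flatten toward uniform as τ drops from 0.7 to 0.3, upweighting low-resource languages.

```python
import numpy as np

def language_sampling_probs(token_counts, tau):
    """Temperature-based sampling: weight each language by n_i ** tau.

    tau = 1.0 reproduces the raw data proportions; lower tau flattens the
    distribution, so low-resource languages are sampled more often than
    their raw token share would suggest.
    """
    langs = list(token_counts)
    weights = np.array([token_counts[l] for l in langs], dtype=np.float64) ** tau
    probs = weights / weights.sum()
    return dict(zip(langs, probs))

# Illustrative (made-up) token counts, in billions of tokens.
counts = {"eng": 500.0, "deu": 60.0, "swh": 2.0, "fao": 0.05}

for tau in (0.7, 0.5, 0.3):  # phase 1 -> phase 2 -> phase 3 schedule
    probs = language_sampling_probs(counts, tau)
    print(tau, {lang: round(p, 3) for lang, p in probs.items()})
```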
## Key Features
- Ultra-low Masking: 5% mask rate for learning efficiency (see the collator sketch after this list)
- Model Merging: Three decay variants (English-focused, 110-language, 1833-language) merged using TIES; this repository is the English-focused version
- Quality Focus: Emphasizes the highest-quality data sources
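As an illustration of that mask rate (not the mmBERT training code), the sketch below configures a 5% masked-language-modeling probability with the Hugging Face `transformers` data collator. The `jhu-clsp/mmBERT-base` checkpoint name is an assumption; any MLM-capable tokenizer works the same way.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumed checkpoint name; substitute any tokenizer suitable for MLM.
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmBERT-base")

# Ultra-low 5% mask rate, as used in the decay phase.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.05,
)

# Build a tiny batch and inspect the masked inputs and labels.
batch = collator([tokenizer("A short example sentence.")])
print(batch["input_ids"].shape, batch["labels"].shape)
```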
## Usage
For decay phase training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT
### Direct Access
Use the script at this link to load any section of the dataset on the fly. Note that this will fail if you try to access too many samples, due to HF rate limiting. To download the full dataset, use the HF Hub's snapshot download (see the sketch below).
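A minimal sketch of the full download using `huggingface_hub.snapshot_download`. The `local_dir` value and the commented-out `allow_patterns` filter are placeholders, not names prescribed by this repository; check the repo's file layout before filtering.

```python
from huggingface_hub import snapshot_download

# Download the full dataset snapshot (large!) to a local directory.
snapshot_download(
    repo_id="jhu-clsp/mmBERT-decay-data",
    repo_type="dataset",
    local_dir="mmbert-decay-data",            # placeholder path
    # allow_patterns=["some-subfolder/*"],    # hypothetical filter to fetch only part of the repo
)
```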
## Performance Impact
The decay phase demonstrates remarkable efficiency in low-resource language learning:
- Tigrinya (TiQuAD): 68% improvement (12.1 F1 points) from including the language in this phase
- Faroese (FoQA): 26% improvement (15.4 F1 points)
- SOTA Performance: Can even outperform GPT-4o and Gemini 2.5 Pro
- Rapid Acquisition: Significant gains with only 100B tokens of exposure
## Related Resources
- Models: mmBERT Model Suite
- Phase 1: Pre-training Data (2.3T tokens)
- Phase 2: Mid-training Data (600B tokens)
- Checkpoints: Training Checkpoints
- Paper: arXiv (https://arxiv.org/abs/2509.06888)
- Code: GitHub Repository
## Citation
@misc{marone2025mmbertmodernmultilingualencoder,
title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
year={2025},
eprint={2509.06888},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.06888},
}
