Update ArXiv ID and paper links #3
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -8,7 +8,7 @@ size_categories:
 task_categories:
 - text-generation
 pretty_name: UltraData-Math
-arxiv:
 tags:
 - llm
 - pretraining

@@ -19,31 +19,31 @@ tags:
 - mathematical-reasoning
 configs:
 - config_name: UltraData-Math-L3-Conversation-Synthetic
-  data_files:
 - config_name: UltraData-Math-L3-Multi-Style-Synthetic
-  data_files:
 - config_name: UltraData-Math-L3-QA-Synthetic
-  data_files:
 - config_name: UltraData-Math-L3-Textbook-Exercise-Synthetic
-  data_files:
 - config_name: UltraData-Math-L2-preview
-  data_files:
 - config_name: UltraData-Math-L1
-  data_files:
 default_config_name: UltraData-Math-L3-Conversation-Synthetic
 ---

 # UltraData-Math

 <div align="center">
-  <img src="assets/ultradata-math-logo.png" width="600"/>
 </div>

 <p align="center">
-  <a href="https://huggingface.co/datasets/openbmb/UltraData-Math">🤗 Dataset</a> | <a href="https://github.com/UltraData-OpenBMB/UltraData-Math">💻 Source Code</a> | <a href="https://huggingface.co/datasets/openbmb/UltraData-Math/blob/main/README_ZH.md">🇨🇳 中文 README</a>
 </p>

-***UltraData-Math*** is a large-scale, high-quality mathematical pre-training dataset totaling **290B+ tokens** across three progressive tiers—**L1** (170.5B tokens web corpus), **L2** (33.7B tokens quality-selected), and **L3** (88B tokens multi-format refined)—designed to systematically enhance mathematical reasoning in LLMs. It has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm4) models.

 ## 🆕 What's New

@@ -52,11 +52,7 @@ default_config_name: UltraData-Math-L3-Conversation-Synthetic

 ## 📚 Introduction

-High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of large language models (LLMs). However, existing mathematical pre-training data construction schemes
-
-- **HTML Parsing**: General parsers (such as trafilatura, readability) are mainly designed for news/article parsing, lacking specialized processing for mathematical formulas and other content, often leading to formula structure destruction or loss; meanwhile, mathematical discussions on forum-like pages are difficult to extract completely.
-- **Data Quality**: Existing datasets generally lack a systematic quality grading mechanism, with high-value mathematical content mixed with low-quality noise.
-- **Data Diversity**: Mainstream datasets mostly originate from textbooks or competition question banks, lacking mathematical discussions and application scenarios in real web pages; synthetic data formats are single, difficult to cover diverse needs such as multi-turn dialogues and multi-style expressions.

 To address these issues, we propose ***UltraData-Math***—a large-scale high-quality pre-training dataset for mathematical reasoning tasks. This dataset is developed based on the [UltraData](https://ultradata.openbmb.cn/blog/position-paper) L0-L4 Tiered Data Management Framework, containing four progressive levels:

@@ -75,67 +71,43 @@ Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** ach

 ## 🏗️ Data Processing Pipeline

-To break through the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Tiered Data Management Framework** proposed by the [UltraData](https://

 <div align="center">
-  <img src="assets/ultradata-math-pipeline.png" width="900"/>
 </div>

 ### L0: Raw Data Parsing and Standardization

-
-
-The L0 phase mainly processes raw web data obtained from sources such as Common Crawl. Given the specificity of mathematical web pages, we develop specialized parsing strategies through the [UltraData-Math-Parser](https://huggingface.co/spaces/openbmb/UltraData-Math-L0-Parser) instead of directly using general parsers like trafilatura or readability.

-- **Unified Parsing Mode**: Automatically identifies page types to ensure complete content extraction
-- **Multi-level Fallback Strategy**:
-- **Mathematical Formula Standardization**:

 ### L1: Heuristic Cleaning and Filtering

-
-
-After obtaining text containing complete mathematical formulas, we clean the L0 data through a series of heuristic rules:

-- **Format Repair**:
-
-  - Remove irrelevant web noise such as navigation bars, footers, ad pop-ups, and "read more".
-- **Content Filtering**:
-  - *Length Filtering*: Remove overly short text fragments, which usually lack context and are difficult to support effective mathematical reasoning training.
-  - *Language Identification*: Ensure the dataset is composed mainly of high-quality English and Chinese mathematical content.
-  - *Document Deduplication*: Perform deduplication at the document level to prevent duplicate content from biasing model training.

 ### L2: Selection Based on Quality Models

-

-
-
-- **
-- **Classifier Training and Distillation**: Train lightweight embedding classifiers based on annotated data to equip them with the ability to identify high-value mathematical content.
-- **Full-scale Inference**: Use the trained classifier to score and screen L1 data in full.
-  - *Retention*: Content containing detailed problem-solving steps, mathematical concept explanations, and high-level academic discussions.
-  - *Exclusion*: Simple stacking of nouns, meaningless lists of numbers, juvenile content, or noise from non-mathematical fields.

 ### L3: Refined Data

-
-
-Natural web data is mostly declarative text, lacking structured reasoning steps and diverse pedagogical formats. To enhance the model's chain-of-thought (CoT) capabilities and multi-turn interaction skills, we build the L3 refined data layer through the [UltraData-Math-Generator](https://huggingface.co/spaces/openbmb/UltraData-Math-L3-Generator):
-
-- **Q&A Pair Generation**: Use high-performance models to rewrite declarative documents into "Question-Answer" pairs, constructing QA-style data with explicit reasoning steps.
-- **Multi-turn Dialogue Synthesis**: Simulate "Teacher-Student" tutoring scenarios to generate multi-turn dialogue data containing follow-up questions, corrections, and guidance.
-- **Multi-style Rewriting**: Rewrite single-source data into multiple styles (such as rigorous textbook style, competition problem-solving style, intuitive popular science style) to improve model generalization.
-- **Knowledge Point Textbook Generation**: Generate systematic textbook-like content based on specific knowledge points to ensure the model masters core mathematical concepts.
-- **Format Repair and Enhancement**: Fix formatting issues in the source data (e.g., broken LaTeX formulas, inconsistent notation) and enhance content coherence to achieve textbook-quality standards.

-
-
-
-
-| UltraData-Math-L1 | 170.5B | 85.6M |
-| UltraData-Math-L2-preview | 33.7B | 14.98M |
-| UltraData-Math-L3 | 88B | 81.4M |

 ## 🚀 Quick Start

@@ -152,45 +124,18 @@ ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L2-preview")

 # Load UltraData-Math-L3 (default: Conversation-Synthetic)
 ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synthetic")
-
-# Other L3 configs:
-# - UltraData-Math-L3-Multi-Style-Synthetic
-# - UltraData-Math-L3-QA-Synthetic
-# - UltraData-Math-L3-Textbook-Exercise-Synthetic
 ```

 ## 📈 Experimental Results

-We evaluated data quality using the **Decay Verification** method
-
-- **General English:** MMLU, ARC-E, ARC-C, BigBench Hard (BBH), CommonSenseQA, HellaSwag, OpenbookQA, PIQA, SIQA, Winogrande
-- **General Chinese:** C-Eval, CMMLU
-- **Math Reasoning:** MATH500, GSM8K, Math-Bench, R-Bench-Math
-- **Code Reasoning:** MBPP, HumanEval
-
-### Effectiveness of L0 Parsing Strategy
-
-To fairly compare different parsing strategies, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source using different parsers. This comparison demonstrates the **effectiveness of our L0 Parser** against other parsers.
-
-<div align="center">
-  <img src="assets/ultradata-math-l0-parser-comparison.png" width="700"/>
-</div>
-

 ### Pipeline Effectiveness (L1 vs L2 vs L3)

-

 <div align="center">
-  <img src="assets/ultradata-math-l1l2l3-comparison.png" width="700"/>
-</div>
-
-### Full Evaluation Results
-
-To compare against existing public mathematical pre-training datasets, we trained models independently on each dataset using the same model architecture and training budget (~100B tokens). The baselines include [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath-Web-Pro](https://huggingface.co/datasets/LLM360/MegaMath), and [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath). All models are evaluated under identical conditions for a fair comparison:
-
-<div align="center">
-  <img src="assets/ultradata-math-full-comparison.png" width="700"/>
 </div>

 ## ❤️ Acknowledgements

@@ -204,6 +149,13 @@ To compare against existing public mathematical pre-training datasets, we traine
 If you find **UltraData-Math** useful in your research, please consider citing:

 ```bibtex
 @misc{ultradata-math,
   title={UltraData-Math},
   author={UltraData Team},

@@ -215,4 +167,4 @@ If you find **UltraData-Math** useful in your research, please consider citing:

 ## 📜 License

-This project is licensed under the [Apache 2.0](./LICENSE) license.

 task_categories:
 - text-generation
 pretty_name: UltraData-Math
+arxiv: '2602.09003'
 tags:
 - llm
 - pretraining

 - mathematical-reasoning
 configs:
 - config_name: UltraData-Math-L3-Conversation-Synthetic
+  data_files: data/UltraData-Math-L3/Conversation-Synthetic/*.parquet
 - config_name: UltraData-Math-L3-Multi-Style-Synthetic
+  data_files: data/UltraData-Math-L3/Multi-Style-Synthetic/*.parquet
 - config_name: UltraData-Math-L3-QA-Synthetic
+  data_files: data/UltraData-Math-L3/QA-Synthetic/*.parquet
 - config_name: UltraData-Math-L3-Textbook-Exercise-Synthetic
+  data_files: data/UltraData-Math-L3/Textbook-Exercise-Synthetic/*.parquet
 - config_name: UltraData-Math-L2-preview
+  data_files: data/UltraData-Math-L2-preview/**/*.parquet
 - config_name: UltraData-Math-L1
+  data_files: data/UltraData-Math-L1/**/*.parquet
 default_config_name: UltraData-Math-L3-Conversation-Synthetic
 ---

 # UltraData-Math

 <div align="center">
+  <img src="https://huggingface.co/datasets/openbmb/UltraData-Math/resolve/main/assets/ultradata-math-logo.png" width="600"/>
 </div>

 <p align="center">
+  <a href="https://huggingface.co/datasets/openbmb/UltraData-Math">🤗 Dataset</a> | <a href="https://huggingface.co/papers/2602.09003">📄 Paper</a> | <a href="https://ultradata.openbmb.cn">🌐 Project Page</a> | <a href="https://github.com/UltraData-OpenBMB/UltraData-Math">💻 Source Code</a> | <a href="https://huggingface.co/datasets/openbmb/UltraData-Math/blob/main/README_ZH.md">🇨🇳 中文 README</a>
 </p>

+***UltraData-Math*** is a large-scale, high-quality mathematical pre-training dataset totaling **290B+ tokens** across three progressive tiers—**L1** (170.5B tokens web corpus), **L2** (33.7B tokens quality-selected), and **L3** (88B tokens multi-format refined)—designed to systematically enhance mathematical reasoning in LLMs. It was introduced in the paper [Data Science and Technology Towards AGI Part I: Tiered Data Management](https://huggingface.co/papers/2602.09003) and has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm4) models.

 ## 🆕 What's New


 ## 📚 Introduction

+High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of large language models (LLMs). However, existing mathematical pre-training data construction schemes often encounter issues with HTML parsing, data quality, and diversity.

 To address these issues, we propose ***UltraData-Math***—a large-scale high-quality pre-training dataset for mathematical reasoning tasks. This dataset is developed based on the [UltraData](https://ultradata.openbmb.cn/blog/position-paper) L0-L4 Tiered Data Management Framework, containing four progressive levels:


 ## 🏗️ Data Processing Pipeline

+To break through the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Tiered Data Management Framework** proposed by the [UltraData](https://huggingface.co/papers/2602.09003) paper.

 <div align="center">
+  <img src="https://huggingface.co/datasets/openbmb/UltraData-Math/resolve/main/assets/ultradata-math-pipeline.png" width="900"/>
 </div>

 ### L0: Raw Data Parsing and Standardization

+The L0 phase mainly processes raw web data obtained from sources such as Common Crawl. Given the specificity of mathematical web pages, we develop specialized parsing strategies through the [UltraData-Math-Parser](https://huggingface.co/spaces/openbmb/UltraData-Math-L0-Parser).

+- **Unified Parsing Mode**: Automatically identifies page types to ensure complete content extraction.
+- **Multi-level Fallback Strategy**: Implementation of a multi-level fallback mechanism to ensure text content is captured even if structured parsing fails.
+- **Mathematical Formula Standardization**: Unification of different mathematical expressions in web pages into standard LaTeX format.

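To make the formula-standardization step above concrete, here is a minimal, illustrative sketch of delimiter normalization. It is not the actual UltraData-Math-Parser implementation, which also has to handle MathML, formula images, and site-specific markup; the regexes and the sample string are assumptions chosen purely for demonstration.

```python
import re

def normalize_math_delimiters(text: str) -> str:
    """Rewrite a few common web math delimiters into plain LaTeX dollar notation.

    Illustrative only: a production parser must also handle MathML, nested
    environments, and many site-specific conventions.
    """
    # \( ... \)  ->  $ ... $      (inline math)
    text = re.sub(r"\\\((.+?)\\\)", lambda m: f"${m.group(1).strip()}$", text, flags=re.S)
    # \[ ... \]  ->  $$ ... $$    (display math)
    text = re.sub(r"\\\[(.+?)\\\]", lambda m: f"$${m.group(1).strip()}$$", text, flags=re.S)
    # [math] ... [/math] forum tags -> $ ... $
    text = re.sub(r"\[math\](.+?)\[/math\]", lambda m: f"${m.group(1).strip()}$", text, flags=re.S)
    return text

print(normalize_math_delimiters(r"Solve \(x^2 - 4 = 0\): \[x = \pm 2\]"))
# -> Solve $x^2 - 4 = 0$: $$x = \pm 2$$
```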
 ### L1: Heuristic Cleaning and Filtering

+Cleans noise through heuristic rules:

+- **Format Repair**: Clean invisible characters, garbled text, and unnatural continuous line breaks.
+- **Content Filtering**: Length filtering, language identification, and document-level deduplication.

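As a rough illustration of what such heuristic rules can look like, consider the toy sketch below. It is not the project's actual cleaning code: the minimum-length threshold and the exact-hash deduplication are placeholder choices, and language identification is omitted entirely.

```python
import hashlib
import re
from typing import Optional

def clean_document(text: str) -> Optional[str]:
    """Toy L1-style cleaning: format repair plus a simple length filter."""
    # Format repair: drop zero-width/invisible characters and control bytes,
    # then collapse unnatural runs of blank lines.
    text = re.sub(r"[\u200b\u200c\u200d\ufeff]", "", text)
    text = "".join(ch for ch in text if ch in "\n\t" or ord(ch) >= 32)
    text = re.sub(r"\n{3,}", "\n\n", text).strip()
    # Content filtering: discard fragments too short to support reasoning
    # training (the 200-character threshold is an arbitrary illustrative value).
    return text if len(text) >= 200 else None

def dedup_documents(docs: list[str]) -> list[str]:
    """Exact document-level deduplication by content hash."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.md5(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept
```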
 ### L2: Selection Based on Quality Models

+The L2 phase introduces a model-based quality assessment system:

+- **Seed Data Annotation**: Use proprietary large models to score seed data.
+- **Classifier Training and Distillation**: Train lightweight embedding classifiers based on annotated data.
+- **Full-scale Inference**: Use the trained classifier to score and screen L1 data.

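A minimal sketch of the "lightweight embedding classifier" idea, assuming documents have already been embedded and given 0/1 quality labels distilled from a stronger scoring model. The embedding dimension, the random stand-in data, and the 0.5 keep-threshold are illustrative assumptions, not the UltraData-Math configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 384))   # stand-in for real document embeddings
labels = rng.integers(0, 2, size=1000)      # stand-in for distilled quality labels

# Classifier training and distillation: a small linear probe on embeddings.
clf = LogisticRegression(max_iter=1000)
clf.fit(embeddings, labels)

# Full-scale inference: score candidate documents and keep the high-quality ones.
candidates = rng.normal(size=(5, 384))
scores = clf.predict_proba(candidates)[:, 1]
keep = scores >= 0.5                        # threshold is a free design choice
print(scores.round(3), keep)
```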
 ### L3: Refined Data

+Production of structured content with clear reasoning through the [UltraData-Math-Generator](https://huggingface.co/spaces/openbmb/UltraData-Math-L3-Generator):

+- **Q&A Pair Generation**: Rewrite declarative documents into "Question-Answer" pairs.
+- **Multi-turn Dialogue Synthesis**: Simulate "Teacher-Student" tutoring scenarios.
+- **Multi-style Rewriting**: Rewrite single-source data into multiple styles.
+- **Knowledge Point Textbook Generation**: Systematic textbook-like content based on specific knowledge points.

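Purely as an illustration of the synthesis step, the sketch below shows how a Q&A rewriting prompt and a synthetic "teacher-student" record could be laid out. The prompt wording, field names, and record schema are hypothetical and are not claimed to match the actual L3 format (the dataset viewer shows the real fields).

```python
QA_REWRITE_PROMPT = (
    "Rewrite the following passage as a question-and-answer pair.\n"
    "Show every reasoning step in the answer and keep all LaTeX intact.\n\n"
    "Passage:\n{passage}"
)

def build_qa_request(passage: str) -> list[dict]:
    """Messages for an OpenAI-style chat generator (model choice not specified here)."""
    return [
        {"role": "system", "content": "You are a careful mathematics teacher."},
        {"role": "user", "content": QA_REWRITE_PROMPT.format(passage=passage)},
    ]

# A synthetic multi-turn record might then be stored roughly like this
# (hypothetical schema for illustration only):
example_record = {
    "style": "conversation",
    "messages": [
        {"role": "student", "content": "Why does $x^2 - 4 = 0$ have two solutions?"},
        {"role": "teacher", "content": "Factor it: $(x-2)(x+2)=0$, so $x=2$ or $x=-2$."},
    ],
}
print(build_qa_request("The derivative of $x^2$ is $2x$.")[1]["content"])
```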
 ## 🚀 Quick Start


 # Load UltraData-Math-L3 (default: Conversation-Synthetic)
 ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synthetic")
 ```

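For the larger configs it can be convenient to stream instead of downloading everything up front; a small usage sketch follows (the `train` split name and the field layout are assumptions; check the dataset viewer for the exact splits and columns).

```python
from datasets import load_dataset

ds = load_dataset(
    "openbmb/UltraData-Math",
    "UltraData-Math-L3-QA-Synthetic",
    split="train",          # assumed split name
    streaming=True,
)
for i, example in enumerate(ds):
    print(example)          # field names depend on the config
    if i == 2:
        break
```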
 ## 📈 Experimental Results

+We evaluated data quality using the **Decay Verification** method by continuing pre-training of a **MiniCPM-1.2B** base model with **~100B tokens**.

 ### Pipeline Effectiveness (L1 vs L2 vs L3)

+Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH500, GSM8K) and general capabilities.

 <div align="center">
+  <img src="https://huggingface.co/datasets/openbmb/UltraData-Math/resolve/main/assets/ultradata-math-l1l2l3-comparison.png" width="700"/>
 </div>

 ## ❤️ Acknowledgements

 If you find **UltraData-Math** useful in your research, please consider citing:

 ```bibtex
+@article{wang2026tiered,
+  title={Data Science and Technology Towards AGI Part I: Tiered Data Management},
+  author={Yudong Wang and Zixuan Fu and Hengyu Zhao and Chen Zhao and Chuyue Zhou and Xinle Lin and Hongya Lyu and Shuaikang Xue and Yi Yi and Yingjiao Wang and Zhi Zheng and Yuzhou Zhang and Jie Zhou and Chaojun Xiao and Xu Han and Zhiyuan Liu and Maosong Sun},
+  journal={arXiv preprint arXiv:2602.09003},
+  year={2026}
+}
+
 @misc{ultradata-math,
   title={UltraData-Math},
   author={UltraData Team},


 ## 📜 License

+This project is licensed under the [Apache 2.0](./LICENSE) license.