MathScale: Scaling Instruction Tuning for Mathematical Reasoning
Abstract
Large language models (LLMs) have demonstrated remarkable problem-solving capabilities, yet their proficiency in solving mathematical problems remains inadequate. We propose MathScale, a simple and scalable method for creating high-quality mathematical reasoning data with frontier LLMs (e.g., GPT-3.5). Inspired by the cognitive mechanisms of human mathematical learning, it first extracts topics and knowledge points from seed math questions, then builds a concept graph, which is subsequently used to generate new math questions. MathScale scales effectively along the size axis of the generated math dataset. As a result, we create a mathematical reasoning dataset (MathScaleQA) containing two million math question-answer pairs. To evaluate the mathematical reasoning abilities of LLMs comprehensively, we construct MwpBench, a benchmark of Math Word Problems: a collection of ten datasets (including GSM8K and MATH) covering K-12, college, and competition-level math problems. We apply MathScaleQA to fine-tune open-source LLMs (e.g., LLaMA-2 and Mistral), yielding significantly improved mathematical reasoning capabilities. Evaluated on MwpBench, MathScale-7B achieves state-of-the-art performance across all datasets, surpassing its best peers of equivalent size by 42.9% in micro average accuracy and 43.7% in macro average accuracy.
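To make the pipeline concrete, here is a minimal Python sketch of the concept-graph workflow the abstract describes: extract topics and knowledge points from seed questions, build a concept co-occurrence graph, sample related concepts, and prompt an LLM to compose new question-answer pairs. The `llm` callable, the prompt wording, the edge weighting, and the random-walk sampling are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a MathScale-style data-generation pipeline.
# Assumes a generic `llm(prompt) -> str` completion function (hypothetical);
# all prompts and sampling details below are illustrative.
import random
from collections import defaultdict
from itertools import combinations

def llm(prompt: str) -> str:
    """Placeholder for a frontier-LLM call (e.g., GPT-3.5)."""
    raise NotImplementedError

def extract_concepts(question: str) -> list[str]:
    # Step 1: ask the LLM for the topics / knowledge points behind a seed question.
    reply = llm(f"List, comma-separated, the math topics and knowledge points "
                f"needed to solve:\n{question}")
    return [c.strip() for c in reply.split(",") if c.strip()]

def build_concept_graph(seed_questions: list[str]) -> dict[str, dict[str, int]]:
    # Step 2: concepts that co-occur in the same question share a weighted edge.
    graph: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for q in seed_questions:
        for a, b in combinations(set(extract_concepts(q)), 2):
            graph[a][b] += 1
            graph[b][a] += 1
    return graph

def sample_concepts(graph, k: int = 3, max_steps: int = 20) -> list[str]:
    # Step 3: a short weighted random walk picks a coherent mix of concepts.
    node = random.choice(list(graph))
    walk = [node]
    for _ in range(max_steps):
        if len(walk) >= k or not graph[node]:
            break
        neighbors, weights = zip(*graph[node].items())
        node = random.choices(neighbors, weights=weights)[0]
        if node not in walk:
            walk.append(node)
    return walk

def generate_qa(graph) -> tuple[str, str]:
    # Step 4: prompt the LLM to compose and solve a new question.
    concepts = sample_concepts(graph)
    question = llm(f"Write a new math word problem combining: {', '.join(concepts)}")
    answer = llm(f"Solve step by step:\n{question}")
    return question, answer
```

Repeating step 4 with fresh samples is what lets the dataset scale: the number of concept combinations grows combinatorially with the graph, rather than being bounded by the seed set.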
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs (2024)
- Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models (2024)
- Augmenting Math Word Problems via Iterative Question Composing (2024)
- SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (2024)
- OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset (2024)
We have reproduced MathScale! Check it out here :)
Data: https://huggingface.co/datasets/fdqerq22ds/MathScaleQA-2M
Model: https://huggingface.co/fdqerq22ds/MathScale-Mistral
Models citing this paper: 11
Datasets citing this paper: 8
Spaces citing this paper: 0