---
license: mit
language:
- en
---

# 🦣 MAmmoTH2: Scaling Instructions from the Web

Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)

Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)

Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)

## Introduction

Introducing 🦣 MAmmoTH2, our approach to improving the reasoning abilities of large language models (LLMs) through instruction tuning with data harvested from the web. By efficiently mining 10 million instruction-response pairs from the pre-training web corpus, we build MAmmoTH2 models that substantially improve performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) improves from 11% to 34% on MATH and from 36% to 67% on GSM8K, without training on any domain-specific data. Further training on public instruction-tuning datasets yields MAmmoTH2-Plus, which sets new standards on reasoning and chatbot benchmarks. Our work presents a cost-effective way to acquire large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.

| **Size** | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|------|------------------|-------------------------------------------------------------------|------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |

## Training Data

(WEBINSTRUCT) Coming soon...

![Project Framework](webinstruct.png)

## Training Procedure

The models are fine-tuned on the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
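
Since the training code and WEBINSTRUCT itself are not released yet, here is a purely illustrative sketch of what standard supervised fine-tuning on instruction-response pairs could look like, assuming a recent version of TRL's `SFTTrainer` and a hypothetical local `webinstruct_pairs.json` file; this is not our exact setup.

```python
# Illustrative SFT sketch only: WEBINSTRUCT is not yet released, so the
# data file below is hypothetical, and this is generic TRL fine-tuning
# rather than the exact MAmmoTH2 training code.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical instruction-response pairs in chat format:
# {"messages": [{"role": "user", "content": ...},
#               {"role": "assistant", "content": ...}]}
dataset = load_dataset("json", data_files="webinstruct_pairs.json", split="train")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",  # illustrative choice of base checkpoint
    train_dataset=dataset,
    args=SFTConfig(output_dir="mammoth2-sft"),
)
trainer.train()
```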

## Evaluation

The models are evaluated on open-ended and multiple-choice math and reasoning problems from several datasets. Here are the results:

| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|------------------------|---------------|----------|-----------|----------|-------------|---------|-----------|---------|
| **MAmmoTH2-7B** | 26.7 | 34.2 | 67.4 | 34.8 | 60.6 | 60.0 | 81.8 | 52.2 |
| **MAmmoTH2-8B** | 29.7 | 33.4 | 67.9 | 38.4 | 61.0 | 60.8 | 81.0 | 53.1 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** | 29.2 | 45.0 | 84.7 | 36.8 | 64.5 | 63.1 | 83.0 | 58.0 |
| **MAmmoTH2-8B-Plus** | 32.5 | 42.8 | 84.1 | 37.3 | 65.7 | 67.8 | 83.4 | 59.1 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
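
The exact evaluation scripts are in our GitHub repo; as a simplified sketch of how open-ended answers on GSM8K-style problems are typically scored, one can compare the final number in the model's solution against the reference answer (an approximation of common practice, not our official harness):

```python
# Simplified sketch of open-ended answer matching (GSM8K-style).
# This approximates common practice; it is not the official harness.
import re

_NUMBER = re.compile(r"-?\d+(?:\.\d+)?")

def extract_final_number(text: str):
    """Return the last number in a solution string, ignoring thousands commas."""
    matches = _NUMBER.findall(text.replace(",", ""))
    return matches[-1] if matches else None

def is_correct(prediction: str, reference: str) -> bool:
    pred = extract_final_number(prediction)
    gold = extract_final_number(reference)
    return pred is not None and gold is not None and float(pred) == float(gold)

# A solution ending in "... = 72 clips." matches a gold answer of "72".
assert is_correct("48 + 24 = 72 clips in total.", "#### 72")
```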

## Usage

You can use the models through Hugging Face's Transformers library. Create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution, as in the sketch below.
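
A minimal sketch, assuming a recent `transformers` release whose text-generation pipeline accepts chat messages (the exact prompt format for these models is documented in the GitHub repo):

```python
# Minimal sketch: querying MAmmoTH2 through the Transformers pipeline API.
# Assumes a recent transformers release with chat-template support and
# accelerate installed for device_map="auto".
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-7B-Plus",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{
    "role": "user",
    "content": "If 3x + 7 = 22, what is the value of x?",
}]
result = pipe(messages, max_new_tokens=512, do_sample=False)
# With chat input, generated_text holds the whole conversation; the last
# message is the model's answer.
print(result[0]["generated_text"][-1]["content"])
```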

Check our GitHub repo for more advanced usage: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)

## Limitations

We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of a given problem, and not all mathematical fields can be covered comprehensively.

## Citation

If you use the models, data, or code from this project, please cite the original paper:

```
@article{yue2024mammoth2,
  title={MAmmoTH2: Scaling Instructions from the Web},
  author={Xiang Yue and Tuney Zheng and Ge Zhang and Wenhu Chen},
  journal={arXiv preprint arXiv:2405.03548},
  year={2024}
}
```