---
pipeline_tag: text-generation
license: other
language:
- en
tags:
- math
datasets:
- internlm/Lean-Workbook
- internlm/Lean-Github
---

# InternLM2.5-Step-Prover

<div align="center">

<img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM-Math</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>

A state-of-the-art LEAN4 step prover.

[💻 GitHub](https://github.com/InternLM/InternLM-Math) [📊 Dataset](https://huggingface.co/datasets/internlm/Lean-Github) [📖 Paper](https://arxiv.org/abs/2410.15700)
</div>

InternLM2.5-Step-Prover-Critic is a 1.8B critic model that achieves state-of-the-art performance on the MiniF2F, ProofNet, and Putnam math benchmarks, demonstrating formal math proving ability across multiple domains.

# Dialogue Example
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the critic model; trust_remote_code=True is required for its custom scoring head.
model = AutoModel.from_pretrained(
    "internlm/internlm2_5-step-prover-critic",
    device_map="cuda",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2_5-step-prover-critic", trust_remote_code=True)

# Each chat pairs the critic's fixed prompt with a candidate Lean proof state.
# chat_1 is a finished proof ("no goals"); chat_2 still has an open goal.
chat_1 = [
    {"role": "user", "content": "Which state is closer to 'no goals'?"},
    {"role": "assistant", "content": "no goals"}
]
chat_2 = [
    {"role": "user", "content": "Which state is closer to 'no goals'?"},
    {"role": "assistant", "content": "x : ℕ\nh₀ : ↑x + 4 / 100 * ↑x = 598\n⊢ 100 * x = 100 * 575"}
]

# A higher score means the critic judges the state closer to completing the
# proof, so score1 should exceed score2 here.
score1 = model.get_score(tokenizer, chat_1)
score2 = model.get_score(tokenizer, chat_2)
print("score1: ", score1)
print("score2: ", score2)
```

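As the tables below indicate, InternLM2.5-Step-Prover is a tree-search method; the critic's role is to rank candidate proof states so the search expands the most promising one first. Below is a minimal, hypothetical sketch of a critic-guided best-first loop: `expand_state` (standing in for the policy model plus Lean verification) and the `get_score` wrapper are illustrative placeholders, not part of this repository's API.

```python
import heapq

def best_first_search(initial_state, expand_state, get_score, max_steps=100):
    # Keep a max-heap of open proof states ordered by critic score
    # (heapq is a min-heap, so scores are negated).
    frontier = [(-get_score(initial_state), initial_state)]
    for _ in range(max_steps):
        if not frontier:
            break
        _, state = heapq.heappop(frontier)
        if state == "no goals":  # proof complete
            return state
        # expand_state(state) -> list of successor proof states; a placeholder
        # for running the prover's tactic suggestions through a Lean checker.
        for next_state in expand_state(state):
            heapq.heappush(frontier, (-get_score(next_state), next_state))
    return None

# get_score would wrap the critic from the example above, e.g.:
# def get_score(state):
#     chat = [{"role": "user", "content": "Which state is closer to 'no goals'?"},
#             {"role": "assistant", "content": state}]
#     return model.get_score(tokenizer, chat)
```
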
# Performance

## MiniF2F
| Method | Model size | Pass | miniF2F-valid | miniF2F-test |
|--------|------------|------|---------------|--------------|
| **Whole-Proof Generation Methods** | | | | |
| GPT-4-turbo 0409 | - | 64 | 25.4% | 23.0% |
| DeepSeekMath-Base | 7B | 128 | 25.4% | 27.5% |
| DeepSeek-Prover | 7B | 1 | - | 30.0% |
| | | 64 | - | 46.3% |
| | | 128 | - | 46.3% |
| | | 8192 | - | 48.8% |
| | | 65536 | - | 50.0% |
| | | cumulative | *60.2%* | *52.0%* |
| DeepSeek-Prover-1.5 | 7B | 32 | - | 63.5% |
| TheoremLlama | - | cumulative | 36.5% | 33.6% |
| **Tree Search Methods** | | | | |
| COPRA (GPT-3.5) | - | 1 | - | 9.0% |
| COPRA (GPT-4) | - | 1 | - | 26.6% |
| DSP (Isabelle) | 540B | 100 | 42.6% | 38.9% |
| Proof Artifact Co-Training | 837M | 1 | 23.9% | 24.6% |
| | | 8 | 29.3% | 29.2% |
| ReProver | 229M | 1 | - | 25.0% |
| Llemma | 7B | 1 | 26.2% | 26.2% |
| Llemma | 34B | 1 | 27.9% | 25.8% |
| Curriculum Learning | 837M | 1 | 33.6% | 29.6% |
| | | 8 | 41.2% | 34.5% |
| | | 64 | 47.3% | 36.6% |
| Hypertree Proof Search | 600M | cumulative | 58.6% | - |
| | | 64 | - | 41.0% |
| Lean-STaR | 7B | 64 | - | 46.3% |
| InternLM2-Math | 7B | 1 | 29.9% | 30.3% |
| InternLM2-Math-Plus | 7B | 1 | - | 43.4% |
| InternLM2-Step-Prover | 7B | 1 | 59.8% | 48.8% |
| InternLM2.5-Step-Prover | 7B | 1 | 55.4% | 47.3% |
| InternLM2.5-Step-Prover+Critic | 7B | 256 | **69.6%** | **65.9%** |

## ProofNet & Putnam
| Method | Model size | Pass | Result |
|--------|------------|------|--------|
| **ProofNet benchmark** | | | |
| ReProver | 229M | 1 | 13.8% |
| InternLM2-Step-Prover | 7B | 1 | 18.1% |
| InternLM2.5-Step-Prover | 7B | 256 | **27.0%** |
| **Putnam benchmark** | | | |
| GPT-4 | - | 10 | 1/640 |
| COPRA (GPT-4) | - | 10 | 1/640 |
| DSP (Isabelle) | 540B | 10 | 4/640 |
| ReProver | 229M | 1 | 0/640 |
| InternLM2-Step-Prover | 7B | 1 | 5/640 |
| InternLM2.5-Step-Prover | 7B | 1 | **6/640** |

119
+ ```
120
+ @misc{wu2024internlm25stepproveradvancingautomatedtheorem,
121
+ title={InternLM2.5-StepProver: Advancing Automated Theorem Proving via Expert Iteration on Large-Scale LEAN Problems},
122
+ author={Zijian Wu and Suozhi Huang and Zhejian Zhou and Huaiyuan Ying and Jiayu Wang and Dahua Lin and Kai Chen},
123
+ year={2024},
124
+ eprint={2410.15700},
125
+ archivePrefix={arXiv},
126
+ primaryClass={cs.AI},
127
+ url={https://arxiv.org/abs/2410.15700},
128
+ }
129
+ ```