lewtun (HF staff) committed commit 35828e9
1 parent: e17c587

Update README.md

Files changed (1): README.md (+152 −22)
README.md CHANGED
@@ -5,13 +5,85 @@ tags:
5
  - generated_from_trainer
6
  - math
7
  - aimo
8
- datasets:
9
- - AI-MO/NuminaMath-TIR
10
  model-index:
11
- - name: qwen2-72b-sft-aimo_v03.00
12
  results: []
13
  license: other
14
  license_name: tongyi-qianwen
15
  language:
16
  - en
17
  ---
@@ -19,24 +91,76 @@ language:
19
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
20
  should probably proofread and complete it, then remove this comment. -->
21
 
22
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/huggingface/h4/runs/bxdbewkc)
23
- # qwen2-72b-sft-aimo_v03.00
24
 
25
- This model is a fine-tuned version of [AI-MO/qwen2-72b-sft](https://huggingface.co/AI-MO/qwen2-72b-sft) on the AI-MO/numina-dataset-tora-v1.0-release-candidate-1 dataset.
26
- It achieves the following results on the evaluation set:
27
- - Loss: 0.4792
28
 
29
  ## Model description
30
 
31
- More information needed
32
 
33
  ## Intended uses & limitations
34
 
35
- More information needed
36
 
37
- ## Training and evaluation data
38
 
39
- More information needed
40
 
41
  ## Training procedure
42
 
@@ -56,19 +180,25 @@ The following hyperparameters were used during training:
56
  - lr_scheduler_warmup_ratio: 0.1
57
  - num_epochs: 4
58
 
59
- ### Training results
60
-
61
- | Training Loss | Epoch | Step | Validation Loss |
62
- |:-------------:|:-----:|:----:|:---------------:|
63
- | 0.3939 | 1.0 | 797 | 0.3793 |
64
- | 0.2618 | 2.0 | 1594 | 0.3876 |
65
- | 0.1141 | 3.0 | 2391 | 0.4310 |
66
- | 0.0363 | 4.0 | 3188 | 0.4792 |
67
-
68
 
69
  ### Framework versions
70
 
71
  - Transformers 4.42.3
72
  - Pytorch 2.3.0+cu121
73
  - Datasets 2.18.0
74
- - Tokenizers 0.19.1
5
  - generated_from_trainer
6
  - math
7
  - aimo
8
+ widget:
9
+ - example_title: Math problem
10
+ messages:
11
+ - role: user
12
+ content: >-
13
+ For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$
14
+ have two distinct integer roots?
15
+ output:
16
+ text: >-
17
+ ### Solution:
18
+
19
+ 1- For the polynomial \\( x^2 + kx + 36 \\) to have two distinct integer
20
+ roots, let's denote these roots by \\( r_1 \\) and \\( r_2 \\).
21
+
22
+
23
+
24
+ 2- According to Vieta's formulas, the sum of the roots \\( r_1 + r_2 \\)
25
+ is equal to \\(-k\\), and the product of the roots \\( r_1 \\cdot r_2 \\)
26
+ is equal to 36.
27
+
28
+
29
+
30
+ 3- To find the distinct integer pairs \\((r_1, r_2)\\) whose product is
31
+ 36, we need to determine all pairs of integers \\((r_1, r_2)\\) such that
32
+ \\( r_1 \\cdot r_2 = 36 \\) and \\( r_1 \\neq r_2 \\).
33
+
34
+
35
+
36
+ 4- From the equation \\( r_1 + r_2 = -k \\), for each valid pair \\((r_1,
37
+ r_2)\\), we can compute the corresponding value of \\( k \\).
38
+
39
+
40
+
41
+ 5- Finally, since we need the polynomial to have two distinct integer
42
+ roots, we need to ensure that \\( r_1 \\) and \\( r_2 \\) are distinct.
43
+
44
+ Let's start by finding all pairs \\((r_1, r_2)\\) such that \\( r_1 \\cdot
45
+ r_2 = 36 \\). We'll then determine the values of \\( k \\) and ensure the
46
+ roots are distinct.
47
+
48
+ ```python
+ import itertools
49
+
50
+ # Find all pairs (r1, r2) such that r1 * r2 = 36
+ product_36 = 36
51
+ factor_pairs = []
52
+
53
+ for i in range(1, product_36 + 1):
54
+ if product_36 % i == 0:
55
+ pair = (i, product_36 // i)
56
+ if pair[0] != pair[1]: # Ensure distinct pairs
57
+ factor_pairs.append(pair)
58
+
59
+ # Calculate k for each pair and ensure distinct integer roots
60
+ valid_k_values = set()
61
+ for r1, r2 in factor_pairs:
62
+ if r1 != r2:
63
+ k = -(r1 + r2)
64
+ valid_k_values.add(k)
65
+
66
+ print((len(valid_k_values), sorted(valid_k_values)))
67
+ ```
68
+
69
+ ```output
70
+ (4, [-37, -20, -15, -13])
71
+ ```
72
+ The distinct integer values of \\( k \\) that make the
73
+ polynomial \\( x^2 + kx + 36 \\) have two distinct integer roots are
74
+ \\(-37, -20, -15, \\text{ and } -13\\).
75
+
76
+ Therefore, the number of such values of \\( k \\) is:
77
+
78
+ \\[ \\boxed{4} \\]
79
+ pipeline_tag: text-generation
80
  model-index:
81
+ - name: NuminaMath-72B-TIR
82
  results: []
83
  license: other
84
  license_name: tongyi-qianwen
85
+ datasets:
86
+ - AI-MO/NuminaMath-TIR
87
  language:
88
  - en
89
  ---
 
91
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
92
  should probably proofread and complete it, then remove this comment. -->
93
 
94
+ <img src="https://huggingface.co/AI-MO/NuminaMath-7B-TIR/resolve/main/thumbnail.png" alt="Numina Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
95
+
96
+
97
+ # Model Card for NuminaMath 72B TIR
98
+
99
+ NuminaMath is a series of language models that are trained to solve math problems using tool-integrated reasoning (TIR). NuminaMath 7B TIR won the first progress prize of the [AI Mathematical Olympiad (AIMO)](https://aimoprize.com), with a score of 29/50 on the public and private test sets.
100
+
101
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/NyhBs_gzg40iwL995DO9L.png)
102
+
103
+ This model is a fine-tuned version of [Qwen/Qwen2-72B](https://huggingface.co/Qwen/Qwen2-72B) with two stages of supervised fine-tuning:
104
+
105
+ * **Stage 1:** fine-tune the base model on a large, diverse dataset of natural language math problems and solutions, where each solution is templated with Chain of Thought (CoT) to facilitate reasoning.
106
+ * **Stage 2:** fine-tune the model from Stage 1 on a synthetic dataset of tool-integrated reasoning, where each math problem is decomposed into a sequence of rationales, Python programs, and their outputs.
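Schematically, a Stage 2 training target interleaves a natural-language rationale, a fenced Python program, and its captured output. The snippet below is only an illustrative mock-up of that shape (the real template and data are defined in the project repository), not an actual training sample:

```python
# Illustrative mock-up of the interleaved TIR format: rationale, program, output.
sample = (
    "We need the number of positive divisors of 36, so we count them directly.\n"
    "```python\n"
    "print(len([d for d in range(1, 37) if 36 % d == 0]))\n"
    "```\n"
    "```output\n"
    "9\n"
    "```\n"
    "So 36 has 9 positive divisors.\n"
)
print(sample)
```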
107
+
108
 
109
 
110
  ## Model description
111
 
112
+ - **Model type:** A 72B parameter math LLM fine-tuned on a dataset with 860k+ math problem-solution pairs.
113
+ - **Language(s) (NLP):** Primarily English
114
+ - **License:** Tongyi Qianwen
115
+ - **Finetuned from model:** [Qwen/Qwen2-72B](https://huggingface.co/Qwen/Qwen2-72B)
116
+
117
+ ### Model Sources
118
+
119
+ <!-- Provide the basic links for the model. -->
120
+
121
+ - **Repository:** https://github.com/project-numina/aimo-progress-prize
122
 
123
  ## Intended uses & limitations
124
 
125
+ Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
126
 
127
+ ```python
128
+ import re
129
+ import torch
130
+ from transformers import pipeline
131
+
132
+ pipe = pipeline("text-generation", model="AI-MO/NuminaMath-72B-TIR", torch_dtype=torch.bfloat16, device_map="auto")
133
+
134
+ messages = [
135
+ {"role": "user", "content": "For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$ have two distinct integer roots?"},
136
+ ]
137
+ prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
138
+
139
+ gen_config = {
140
+ "max_new_tokens": 1024,
141
+ "do_sample": False,
142
+ "stop_strings": ["```output"], # Generate until Python code block is complete
143
+ "tokenizer": pipe.tokenizer,
144
+ }
145
+
146
+ outputs = pipe(prompt, **gen_config)
147
+ text = outputs[0]["generated_text"]
148
+ print(text)
149
+
150
+ # WARNING: This code will execute the Python code in the string. We show this for educational purposes only.
151
+ # Please refer to our full pipeline for a safer way to execute code.
152
+ python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
153
+ exec(python_code)
154
+ ```
155
+
156
+ The above executes a single step of Python code; for more complex problems, you will want to run the generate-and-execute logic for several steps to obtain the final solution.
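That multi-step loop can be sketched as follows. Here `fake_generate` is a hypothetical stand-in for the `pipe(prompt, **gen_config)` call, so the control flow is visible without loading the 72B model: generate until the code-output fence, execute the latest Python block, append its stdout, and resume.

```python
import contextlib
import io
import re

def fake_generate(text):
    # Hypothetical stand-in for `pipe(prompt, **gen_config)`: emits one
    # Python block on the first call, then a final answer once the prompt
    # already contains an executed output block.
    if "```output" not in text:
        return text + "```python\nprint(6 * 7)\n```\n```output\n"
    return text + "The final answer is 42."

def run_tir(prompt, max_steps=4):
    text = prompt
    for _ in range(max_steps):
        text = fake_generate(text)
        # Generation that does not stop at "```output" is the final answer.
        if not text.endswith("```output\n"):
            break
        # WARNING: exec of model output is unsafe; sandbox it in real use.
        code = re.findall(r"```python(.*?)```", text, re.DOTALL)[-1]
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        # Close the output fence with the captured stdout and continue.
        text += buf.getvalue() + "```\n"
    return text

print(run_tir("For how many values of the constant k ...?\n"))
```

In a real loop, each step would call the pipeline on the growing transcript; as noted above, the project's full pipeline executes code in a safer way.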
157
+
158
+ ## Bias, Risks, and Limitations
159
+
160
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
161
+
162
+ NuminaMath 72B TIR was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of [AMC 12](https://artofproblemsolving.com/wiki/index.php/2023_AMC_12A_Problems), but often struggles to generate a valid solution on harder problems at the AIME and Math Olympiad level. The model also struggles to solve geometry problems, likely due to its limited capacity and lack of other modalities like vision.
163
 
164
 
165
  ## Training procedure
166
 
 
180
  - lr_scheduler_warmup_ratio: 0.1
181
  - num_epochs: 4
182
 
183
 
184
  ### Framework versions
185
 
186
  - Transformers 4.42.3
187
  - Pytorch 2.3.0+cu121
188
  - Datasets 2.18.0
189
+ - Tokenizers 0.19.1
190
+
191
+ ## Citation
192
+
193
+ If you find NuminaMath 7B TIR useful in your work, please cite it with:
194
+
195
+ ```
196
+ @misc{numina_math_7b,
197
+ author = {Edward Beeching and Shengyi Costa Huang and Albert Jiang and Jia Li and Benjamin Lipkin and Zihan Qina and Kashif Rasul and Ziju Shen and Roman Soletskyi and Lewis Tunstall},
198
+ title = {NuminaMath 7B TIR},
199
+ year = {2024},
200
+ publisher = {Numina & Hugging Face},
201
+ journal = {Hugging Face repository},
202
+ howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-7B-TIR}}
203
+ }
204
+ ```