Update README.md
README.md CHANGED

````diff
@@ -28,6 +28,9 @@ Here give some examples of how to use our model.
 
 **Chat Completion**
 
+❗❗❗ Please use chain-of-thought prompt to test DeepSeekMath-Instruct and DeepSeekMath-RL:
+
+* English questions: `{question}\nPlease reason step by step, and put your final answer within \\boxed{}.`
+* Chinese questions: `{question}\n请通过逐步推理来解答问题，并把最终答案放置于\\boxed{}中。`
 
 ```python
 import torch
@@ -40,7 +43,7 @@ model.generation_config = GenerationConfig.from_pretrained(model_name)
 model.generation_config.pad_token_id = model.generation_config.eos_token_id
 
 messages = [
-    {"role": "user", "content": "what is the integral of x^2 from 0 to 2?"}
+    {"role": "user", "content": "what is the integral of x^2 from 0 to 2?\nPlease reason step by step, and put your final answer within \\boxed{}."}
 ]
 input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
 outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)
````
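The change above appends the chain-of-thought suffix to the user message by hand. As a minimal, model-free sketch of that convention, the wrapping and the matching `\boxed{}` answer extraction can be factored into small helpers (`build_cot_message` and `extract_boxed_answer` are hypothetical names for illustration, not part of the DeepSeekMath repo):

```python
import re

# English chain-of-thought suffix recommended in the README diff above.
COT_SUFFIX_EN = "\nPlease reason step by step, and put your final answer within \\boxed{}."

def build_cot_message(question: str) -> dict:
    """Wrap a raw question in the recommended chain-of-thought prompt (hypothetical helper)."""
    return {"role": "user", "content": question + COT_SUFFIX_EN}

def extract_boxed_answer(text: str):
    """Return the content of the last \\boxed{...} in a response (no nested braces), or None."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

# The resulting messages list can be passed to tokenizer.apply_chat_template(...)
# exactly as in the snippet above.
messages = [build_cot_message("what is the integral of x^2 from 0 to 2?")]

reply = "The integral of x^2 from 0 to 2 is \\boxed{8/3}."
print(extract_boxed_answer(reply))  # 8/3
```

Note that the simple regex does not handle nested braces (e.g. `\boxed{\frac{8}{3}}`); a real parser would need to balance braces.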