hiieu committed
Commit e4c2427
1 Parent(s): 91fce29

Update README.md

Files changed (1):
  README.md +30 -31
README.md CHANGED
@@ -15,6 +15,36 @@ base_model: meta-llama/Meta-Llama-3-8B-Instruct
 This model was fine-tuned on meta-llama/Meta-Llama-3-8B-Instruct for function calling and json mode.

 ## Usage
+### JSON Mode
+```python
+messages = [
+    {"role": "system", "content": "You are a helpful assistant, answer in JSON with key \"message\""},
+    {"role": "user", "content": "Who are you?"},
+]
+
+input_ids = tokenizer.apply_chat_template(
+    messages,
+    add_generation_prompt=True,
+    return_tensors="pt"
+).to(model.device)
+
+terminators = [
+    tokenizer.eos_token_id,
+    tokenizer.convert_tokens_to_ids("<|eot_id|>")
+]
+
+outputs = model.generate(
+    input_ids,
+    max_new_tokens=256,
+    eos_token_id=terminators,
+    do_sample=True,
+    temperature=0.6,
+    top_p=0.9,
+)
+response = outputs[0][input_ids.shape[-1]:]
+print(tokenizer.decode(response, skip_special_tokens=True))
+# >> {"message": "I am a helpful assistant, with access to a vast amount of information. I can help you with tasks such as answering questions, providing definitions, translating text, and more. Feel free to ask me anything!"}
+```

 ### Function Calling
 ```python
@@ -82,37 +112,6 @@ print(tokenizer.decode(response, skip_special_tokens=True))
 # >> The current temperature in Tokyo is 30 degrees Celsius.
 ```

-### JSON Mode
-```python
-messages = [
-    {"role": "system", "content": "You are a helpful assistant, answer in JSON with key \"message\""},
-    {"role": "user", "content": "Who are you?"},
-]
-
-input_ids = tokenizer.apply_chat_template(
-    messages,
-    add_generation_prompt=True,
-    return_tensors="pt"
-).to(model.device)
-
-terminators = [
-    tokenizer.eos_token_id,
-    tokenizer.convert_tokens_to_ids("<|eot_id|>")
-]
-
-outputs = model.generate(
-    input_ids,
-    max_new_tokens=256,
-    eos_token_id=terminators,
-    do_sample=True,
-    temperature=0.6,
-    top_p=0.9,
-)
-response = outputs[0][input_ids.shape[-1]:]
-print(tokenizer.decode(response, skip_special_tokens=True))
-# >> {"message": "I am a helpful assistant, with access to a vast amount of information. I can help you with tasks such as answering questions, providing definitions, translating text, and more. Feel free to ask me anything!"}
-```
-
 # Uploaded model

 - **Developed by:** hiieu
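One note for readers of this diff: both the JSON Mode and Function Calling snippets assume that `model` and `tokenizer` are already in scope. A minimal setup sketch using the standard transformers API follows; the checkpoint id is an assumption, so substitute the actual repo name from this model page.

```python
# Minimal setup sketch for the README examples in this diff.
# NOTE: the repo id below is hypothetical, not confirmed by the diff.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a bf16-capable GPU; use torch.float16 otherwise
    device_map="auto",           # requires the accelerate package
)
```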
 
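Because JSON mode constrains the reply to a single object with a "message" key, the decoded output can be parsed directly. A small sketch, reusing `response` from the JSON Mode example; the fallback branch is an assumption about handling occasional format violations under sampling, not part of the README.

```python
import json

decoded = tokenizer.decode(response, skip_special_tokens=True)
try:
    # Happy path: the model emitted a valid JSON object with a "message" key.
    reply = json.loads(decoded)["message"]
except (json.JSONDecodeError, KeyError):
    # With do_sample=True the format can occasionally break; fall back to raw text.
    reply = decoded
print(reply)
```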