akjindal53244 committed
Commit: c671c51 · Parent(s): 216298b
Update README.md

README.md CHANGED
@@ -95,6 +95,7 @@ Llama-3.1-Storm-8B is a powerful generalist model useful for diverse application
 1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
 2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
 3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
+4. Ollama: `ollama run ajindal/llama3.1-storm:8b`
 
 
 ## 💻 How to Use the Model
@@ -238,7 +239,7 @@ Here are the available functions:
 <tools>{}</tools>
 
 For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
-<tool_call>{
+<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>"""
 
 # Convert the tools list to a string representation
 tools_str = json.dumps(tools_list, ensure_ascii=False)
@@ -285,6 +286,58 @@ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tok
 print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
 ```
 
+#### Use with [Ollama](https://ollama.com/)
+```python
+import ollama
+
+tools = [{
+      'type': 'function',
+      'function': {
+        'name': 'get_current_weather',
+        'description': 'Get the current weather for a city',
+        'parameters': {
+          'type': 'object',
+          'properties': {
+            'city': {
+              'type': 'string',
+              'description': 'The name of the city',
+            },
+          },
+          'required': ['city'],
+        },
+      },
+    },
+    {
+      'type': 'function',
+      'function': {
+        'name': 'get_places_to_visit',
+        'description': 'Get places to visit in a city',
+        'parameters': {
+          'type': 'object',
+          'properties': {
+            'city': {
+              'type': 'string',
+              'description': 'The name of the city',
+            },
+          },
+          'required': ['city'],
+        },
+      },
+    },
+]
+
+response = ollama.chat(
+    model='ajindal/llama3.1-storm:8b',
+    messages=[
+        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
+        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
+    ],
+    tools=tools
+)
+
+print(response['message'])  # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
+```
+
 
 ## Alignment Note
 While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
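Editor's aside (not part of the commit): both examples above show the model returning its tool call as a Python-style dict wrapped in `<tool_call></tool_call>` tags. A minimal sketch of extracting that call on the client side might look like the following; `parse_tool_call` is a hypothetical helper name, not part of the README or any library.

```python
import ast
import re

def parse_tool_call(text):
    """Extract the tool-call dict from a <tool_call>...</tool_call> response.

    Hypothetical helper: returns None when no tool call is present.
    """
    match = re.search(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    if match is None:
        return None
    # The model emits a Python-literal dict (single quotes, not JSON),
    # so ast.literal_eval is safer here than json.loads.
    return ast.literal_eval(match.group(1))

# Example using the response format shown in the README:
call = parse_tool_call(
    "<tool_call>{'tool_name': 'get_current_weather', "
    "'tool_arguments': {'city': 'Toronto'}}</tool_call>"
)
print(call["tool_name"])  # → get_current_weather
```

From the parsed dict, `tool_name` selects the local function to invoke and `tool_arguments` supplies its keyword arguments.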