pankajmathur committed • Commit 1674674
Parent(s): a2a7c58
Update README.md

README.md CHANGED
````diff
@@ -71,7 +71,9 @@ gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
 quantized_model.generate(**gen_input)
 ```
 
-
+Below is a code example showing tool use with this model and the Transformers library.
+
+Since **orca_mini_v8_0_70b** is based on LLaMA-3.3, it supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
 
 Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
 Here is a quick example showing a single simple tool:
````
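The single-tool flow the added README text describes could be sketched as follows. This is a minimal sketch: the `get_current_temperature` tool, its stub return value, and the message contents are illustrative assumptions, not part of this commit; the `apply_chat_template(..., tools=...)` call shown in comments follows the Transformers chat-templating docs linked above.

```python
# Minimal sketch of a single-tool chat flow (tool and messages are illustrative).

def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, e.g. "Paris, France"
    """
    return 22.0  # Illustrative stub; a real tool would query a weather API.

# Conversation so far: a system prompt and a user question the tool can answer.
messages = [
    {"role": "system", "content": "You are a helpful assistant with tool access."},
    {"role": "user", "content": "What is the temperature in Paris right now?"},
]

# With Transformers installed, the tool is passed to the chat template, the
# model emits a tool call, and generation proceeds as in the README, e.g.:
#
#   gen_input = tokenizer.apply_chat_template(
#       messages, tools=[get_current_temperature],
#       add_generation_prompt=True, return_tensors="pt",
#   )
#   quantized_model.generate(**gen_input)

# After the model requests the tool, its result is appended as a "tool"
# message and generation is run again so the model can answer from it:
messages.append(
    {"role": "tool", "name": "get_current_temperature",
     "content": str(get_current_temperature("Paris, France"))}
)
print(messages[-1]["content"])
```

The key design point in this format is that the tool's Python signature and docstring are what the chat template converts into the model-facing tool schema, so keeping type hints and an `Args:` section accurate matters.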