pankajmathur committed on
Commit c938048 · verified · 1 Parent(s): 8f2fb13

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -71,7 +71,7 @@ gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
 quantized_model.generate(**gen_input)
 ```
 
-Below shows a code example on how to do a tool use with this model and tranformer library, Since **orca_mini_v8_0_70b** based upon LLaMA-3.3 so it supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
+Below shows a code example on how to do a tool use with this model and tranformer library, Since **orca_mini_v8_1_70b** is based upon LLaMA-3.3 so it supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
 
 Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
 Here is a quick example showing a single simple tool:
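For reference, the tool-use pattern that the edited paragraph points to (passing tool definitions to `tokenizer.apply_chat_template` via the `tools` argument, as described in the linked Transformers chat-templating guide) looks roughly like the sketch below. It is not part of this commit; the model id `pankajmathur/orca_mini_v8_1_70b` and the `get_current_temperature` helper are assumptions for illustration.

```python
# A minimal sketch (not part of this commit) of the chat-template tool-use flow the
# edited README paragraph describes. The model id and the get_current_temperature
# helper are illustrative assumptions, not taken from the commit itself.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pankajmathur/orca_mini_v8_1_70b")

def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location in degrees Celsius.
    """
    return 22.0  # placeholder value for illustration only

messages = [
    {"role": "user", "content": "What is the temperature in Paris, France right now?"},
]

# Passing Python functions via `tools` renders their signatures and docstrings into
# the Llama 3.x tool-use prompt format.
gen_input = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)

# Generation then proceeds as in the snippet shown in the diff context above, e.g.:
# quantized_model.generate(**gen_input)
```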