---
license: llama3.3
datasets:
- pankajmathur/orca_mini_v1_dataset
language:
- en
base_model:
- meta-llama/Llama-3.3-70B-Instruct
library_name: transformers
---

# Model Name: orca_mini_v8_1_Llama-3.3-70B-Instruct

**orca_mini_v8_1_Llama-3.3-70B-Instruct is trained on various SFT datasets**

<img src="https://huggingface.co/pankajmathur/orca_mini_v5_8b/resolve/main/orca_minis_small.jpeg" width="auto" />

<strong>
Passionate about Generative AI? I help companies privately train and deploy custom, use-case-specific LLMs/MLLMs affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat!

<a href="https://www.linkedin.com/in/pankajam" target="_blank">https://www.linkedin.com/in/pankajam</a> Looking forward to connecting!
</strong>

<br>

### NOTICE
By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning, and any kind of merges.
I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive, general-purpose model.
Dive in and innovate!

### Example Usage
Here is the Llama 3 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are Orca Mini, a helpful AI assistant.<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Hello Orca Mini, what can you do for me?<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```

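The same format is produced by the tokenizer's chat template. As a minimal sketch for inspecting it (assuming the repo ships the standard Llama 3 chat template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pankajmathur/orca_mini_v8_1_70b")
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
# Render the chat as text (no tokenization) to see the prompt format above.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```
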
Below is a code example showing how to use this model in the default (bf16) format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v8_1_70b"
model = AutoModelForCausalLM.from_pretrained(model_slug, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# Generate, then decode only the newly generated tokens.
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```

Below is a code example showing how to use this model in 4-bit quantized format via the bitsandbytes library:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_slug = "pankajmathur/orca_mini_v8_1_70b"
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_slug, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(quantized_model.device)
# Generate, then decode only the newly generated tokens.
output = quantized_model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```

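After loading, you can sanity-check the quantized model's approximate weight memory with the standard `get_memory_footprint()` helper (a minimal sketch, reusing `quantized_model` from above):

```python
# Approximate memory used by the quantized weights, in GB.
print(f"{quantized_model.get_memory_footprint() / 1e9:.1f} GB")
```
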
Below is a code example showing how to do tool use with this model and the transformers library. Since **orca_mini_v8_1_70b** is based on Llama-3.3, it supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).

Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
Here is a quick example showing a single simple tool:

```python
# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, create a chat and apply the chat template
messages = [
    {"role": "system", "content": "You are a bot that responds to weather queries."},
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```

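You can then generate text from this input as normal. A minimal sketch of that step (assuming the bf16 `model` and `tokenizer` from the earlier examples are loaded; generation settings are illustrative):

```python
# Re-apply the template with tensors and a tokenized return, then generate.
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
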
If the model generates a tool call, you should add it to the chat like so:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```

Then call the tool and append the result, with the `tool` role, like so:

```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```

After that, you can `generate()` again to let the model use the tool result in the chat, as in the sketch below. Note that this was a very brief introduction to tool calling - for more information, see the [Llama prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).

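A minimal sketch of that final step (same assumptions as the generation sketch above):

```python
# Re-render the chat, which now includes the tool call and tool result,
# and generate the model's final answer.
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```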