pankajmathur committed
Commit
a2a7c58
1 Parent(s): 72f92e1

Update README.md

Files changed (1)
  1. README.md +104 -190
README.md CHANGED
@@ -8,196 +8,110 @@ base_model:
  - meta-llama/Llama-3.3-70B-Instruct
  library_name: transformers
  ---
- # Model Card for Model ID
-
- orca_mini_v8_0_Llama-3.3-70B-Instruct
-
- ## Model Details
- pankajmathur/orca_mini_v8_0_70b
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
+ # Model Name: orca_mini_v8_0_Llama-3.3-70B-Instruct
+
+ **orca_mini_v8_0_Llama-3.3-70B-Instruct is trained on a variety of SFT datasets.**
+
+ <img src="https://huggingface.co/pankajmathur/orca_mini_v5_8b/resolve/main/orca_minis_small.jpeg" width="auto" />
+
+ <strong>
+ Passionate about Generative AI? I help companies privately train and deploy custom, use-case-specific LLMs/MLLMs affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat!
+
+ <a href="https://www.linkedin.com/in/pankajam" target="_blank">https://www.linkedin.com/in/pankajam</a> Looking forward to connecting!
+ </strong>
+
+ <br>
+
+ ### NOTICE
+ By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning, and any kind of merges.
+ I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model.
+ Dive in and innovate!
+
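+ As a rough starting point for such customization, here is a minimal supervised fine-tuning sketch using the TRL library. This is an illustrative assumption, not the recipe used to train this model: the dataset name is a placeholder, and fully fine-tuning a 70B model requires a multi-GPU setup (PEFT/LoRA is a common lighter-weight alternative).
+
+ ```python
+ from datasets import load_dataset
+ from trl import SFTConfig, SFTTrainer
+
+ # Placeholder dataset: any dataset with a "messages" (chat) or "text" column works
+ train_dataset = load_dataset("your_org/your_sft_dataset", split="train")
+
+ trainer = SFTTrainer(
+     model="pankajmathur/orca_mini_v8_0_70b",
+     train_dataset=train_dataset,
+     args=SFTConfig(output_dir="orca_mini_v8_0_70b-custom"),
+ )
+ trainer.train()
+ ```
+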
+ ### Example Usage
+ Here is the Llama 3 prompt format:
+ ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+ You are Orca Mini, a helpful AI assistant.<|eot_id|>
+ <|start_header_id|>user<|end_header_id|>
+ Hello Orca Mini, what can you do for me?<|eot_id|>
+ <|start_header_id|>assistant<|end_header_id|>
+ ```
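+
+ You do not need to assemble these special tokens by hand; the tokenizer's built-in chat template emits them for you. A quick sketch (the exact string may include extra template defaults, such as a date header):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("pankajmathur/orca_mini_v8_0_70b")
+ messages = [
+     {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
+     {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
+ ]
+ # tokenize=False returns the formatted prompt string instead of token ids
+ print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
+ ```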
+
+ The example below shows how to use this model in the default (bf16) format:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_slug = "pankajmathur/orca_mini_v8_0_70b"
+ # Load with a causal LM head (required for generate()) in bf16, sharded across available GPUs
+ model = AutoModelForCausalLM.from_pretrained(model_slug, torch_dtype=torch.bfloat16, device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained(model_slug)
+ messages = [
+     {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
+     {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
+ ]
+ gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ output = model.generate(gen_input, max_new_tokens=256)
+ print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
+ ```
+
+ The example below shows how to use this model in 4-bit quantized format via the bitsandbytes library:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ model_slug = "pankajmathur/orca_mini_v8_0_70b"
+ quantization_config = BitsAndBytesConfig(load_in_4bit=True)
+ quantized_model = AutoModelForCausalLM.from_pretrained(
+     model_slug, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
+ tokenizer = AutoTokenizer.from_pretrained(model_slug)
+ messages = [
+     {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
+     {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
+ ]
+ gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(quantized_model.device)
+ output = quantized_model.generate(gen_input, max_new_tokens=256)
+ print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
+ ```
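+
+ As a rough rule of thumb, 4-bit weights take about half a byte per parameter, so a 70B model needs on the order of 35 GB for weights alone, plus overhead for activations and the KV cache; actual requirements vary with sequence length and batch size.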
+
+ The example below shows how to do tool use with this model and the Transformers library. Since **orca_mini_v8_0_70b** is based on Llama 3.3, it supports multiple tool-use formats; you can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
+
+ Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
+ Here is a quick example showing a single simple tool:
+
+ ```python
+ # First, define a tool
+ def get_current_temperature(location: str) -> float:
+     """
+     Get the current temperature at a location.
+
+     Args:
+         location: The location to get the temperature for, in the format "City, Country"
+     Returns:
+         The current temperature at the specified location, as a float.
+     """
+     return 22.  # A real function should probably actually get the temperature!
+
+ # Next, create a chat and apply the chat template
+ # (uses the tokenizer loaded in the earlier examples)
+ messages = [
+     {"role": "system", "content": "You are a bot that responds to weather queries."},
+     {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
+ ]
+
+ inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, return_dict=True, return_tensors="pt")
+ ```
+
+ You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:
+
+ ```python
+ tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
+ messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
+ ```
+
+ and then call the tool and append the result, with the `tool` role, like so:
+
+ ```python
+ messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
+ ```
+
+ After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling; for more information,
+ see the [Llama prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
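+
+ As a rough illustration of that final step, here is a minimal sketch (assuming the `model`, `tokenizer`, `messages`, and tool definition from the examples above):
+
+ ```python
+ # Re-apply the chat template so the tool call and tool result are formatted in,
+ # then generate the model's final answer
+ inputs = tokenizer.apply_chat_template(
+     messages, tools=[get_current_temperature], add_generation_prompt=True,
+     return_dict=True, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
+ ```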