giux78 committed
Commit
d8933a6
1 Parent(s): c1007cb

Update README.md

Files changed (1)
  1. README.md +239 -193
README.md CHANGED
@@ -1,199 +1,245 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]

  ---
  library_name: transformers
+ tags:
+ - functioncalling
+ license: apache-2.0
+ language:
+ - it
+ pipeline_tag: text-generation
  ---

+ ## Introduction
+ Zefiro functioncalling extends a Large Language Model's (LLM) chat-completion capability to formulate
+ executable API calls from natural language instructions and API context. With OpenFunctions v2,
+ we now support:
+ 1. Relevance detection - when the user is simply chatting, the model chats; when asked for a function, it returns a function call
+ 2. REST - native REST support
+
+ ## Models Available
+ | Model | Functionality |
+ |---|---|
+ | zefiro-funcioncalling-v0.3-alpha | Given a function and a user intent, returns properly formatted JSON with the right arguments |
+
+ All of our models are hosted in our Hugging Face mii-community org: [zefiro-funcioncalling-v0.3-merged](https://huggingface.co/giux78/zefiro-funcioncalling-v0.3-merged).
+
+ ## Training
+
+ Zefiro functioncalling alpha is a 7B-parameter model built on [gorilla-llm](https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2), which is in turn built on top of the [deepseek coder](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5) LLM.
+
+ ## Example Usage (Hosted)
+
+ Please reference `README.md` in https://github.com/ShishirPatil/gorilla/tree/main/openfunctions for file dependencies and the utilities used.
+
+ 1. OpenFunctions is compatible with OpenAI Functions
+
+ ```bash
+ pip install openai==0.28.1
+ ```
+
+ 2. Point to the Gorilla hosted servers
+
+ ```python
+ import openai
+
+ def get_gorilla_response(prompt="Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes", model="gorilla-openfunctions-v2", functions=[]):
+     openai.api_key = "EMPTY"
+     openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
+     try:
+         completion = openai.ChatCompletion.create(
+             model=model,
+             temperature=0.0,
+             messages=[{"role": "user", "content": prompt}],
+             functions=functions,
+         )
+         return completion.choices[0]
+     except Exception as e:
+         print(e, model, prompt)
+ ```
+
+ 3. Pass the user query and the set of functions; Gorilla OpenFunctions returns fully formatted JSON
+
+ ```python
+ query = "What's the weather like in the two cities of Boston and San Francisco?"
+ functions = [
+     {
+         "name": "get_current_weather",
+         "description": "Get the current weather in a given location",
+         "parameters": {
+             "type": "object",
+             "properties": {
+                 "location": {
+                     "type": "string",
+                     "description": "The city and state, e.g. San Francisco, CA",
+                 },
+                 "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
+             },
+             "required": ["location"],
+         },
+     }
+ ]
+ get_gorilla_response(query, functions=functions)
+ ```
+
+ 4. Expected output **NEW**
+
+ Gorilla returns a readily accessible string **AND** OpenAI-compatible JSON.
+
+ ```python
+ {
+     "index": 0,
+     "message": {
+         "role": "assistant",
+         "content": "get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')",
+         "function_call": [
+             {
+                 "name": "get_current_weather",
+                 "arguments": {
+                     "location": "Boston, MA"
+                 }
+             },
+             {
+                 "name": "get_current_weather",
+                 "arguments": {
+                     "location": "San Francisco, CA"
+                 }
+             }
+         ]
+     },
+     "finish_reason": "stop"
+ }
+ ```
+
+ We have retained the string output that our community loved from OpenFunctions v1 (`get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')` above), and the `function_call` key in the JSON keeps the response OpenAI-compatible.
+
+ This is possible in OpenFunctions v2 because we ensure that the output includes the name of each argument, not just its value, which lets us parse the output into JSON. In scenarios where the output cannot be parsed into JSON, we always return the function-call string.
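+
+ As an illustration (a minimal sketch, not one of the project's own utilities), the call string can be parsed into OpenAI-style `function_call` dicts with Python's standard `ast` module, precisely because every argument is named:
+
+ ```python
+ import ast
+
+ def parse_calls(output: str) -> list[dict]:
+     """Parse "f(a='x'), g(b='y')" into OpenAI-style function_call dicts."""
+     tree = ast.parse(output, mode="eval").body
+     # Comma-separated parallel calls parse as a tuple of Call nodes
+     calls = tree.elts if isinstance(tree, ast.Tuple) else [tree]
+     return [
+         {"name": call.func.id,
+          "arguments": {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}}
+         for call in calls
+     ]
+
+ print(parse_calls("get_current_weather(location='Boston, MA'), "
+                   "get_current_weather(location='San Francisco, CA')"))
+ # [{'name': 'get_current_weather', 'arguments': {'location': 'Boston, MA'}},
+ #  {'name': 'get_current_weather', 'arguments': {'location': 'San Francisco, CA'}}]
+ ```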
+
+ ### End to End Example
+
+ Run the example code in [inference_hosted.py](https://github.com/ShishirPatil/gorilla/tree/main/openfunctions) to see how the model works.
+
+ ```bash
+ python inference_hosted.py
+ ```
+
+ Expected output:
+
+ ```bash
+ (.py3) shishir@dhcp-132-64:~/Work/Gorilla/openfunctions/$ python inference_hosted.py
+ --------------------
+ Function call string(s): get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')
+ --------------------
+ OpenAI compatible `function_call`: [<OpenAIObject at 0x1139ba890> JSON:
+ {
+   "name": "get_current_weather",
+   "arguments":
+     {
+       "location": "Boston, MA"
+     }
+ }, <OpenAIObject at 0x1139ba930> JSON: {
+   "name": "get_current_weather",
+   "arguments":
+     {
+       "location": "San Francisco, CA"
+     }
+ }]
+ ```
+
+ ## Running OpenFunctions Locally
+
+ If you want to run OpenFunctions locally, here is the prompt format that we use:
+
+ ```python
+ import json
+
+ def get_prompt(user_query: str, functions: list = []) -> str:
+     """
+     Generates a conversation prompt based on the user's query and a list of functions.
+
+     Parameters:
+     - user_query (str): The user's query.
+     - functions (list): A list of functions to include in the prompt.
+
+     Returns:
+     - str: The formatted conversation prompt.
+     """
+     system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
+     if len(functions) == 0:
+         return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
+     functions_string = json.dumps(functions)
+     return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: "
+ ```
+
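+ For example, reusing the `functions` list from the hosted example above, the generated prompt looks like this:
+
+ ```python
+ prompt = get_prompt("What's the weather like in Boston?", functions=functions)
+ print(prompt)
+ # You are an AI programming assistant, utilizing the Gorilla LLM model, ...
+ # ### Instruction: <<function>>[{"name": "get_current_weather", ...}]
+ # <<question>>What's the weather like in Boston?
+ # ### Response:
+ ```
+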
+ Further, here is how we format the response.
+
+ Install the dependencies with:
+
+ ```bash
+ pip3 install tree_sitter
+ git clone https://github.com/tree-sitter/tree-sitter-java.git
+ git clone https://github.com/tree-sitter/tree-sitter-javascript.git
+ ```
+
+ Then you can use the following code to format the response:
+
+ ```python
+ from openfunctions_utils import strip_function_calls, parse_function_call
+
+ def format_response(response: str):
+     """
+     Formats the response from the OpenFunctions model.
+
+     Parameters:
+     - response (str): The response generated by the LLM.
+
+     Returns:
+     - str: The formatted response.
+     - dict: The function call(s) extracted from the response.
+     """
+     function_call_dicts = None
+     try:
+         response = strip_function_calls(response)
+         # Parallel function calls are returned as a str, list[dict]
+         if len(response) > 1:
+             function_call_dicts = []
+             for function_call in response:
+                 function_call_dicts.append(parse_function_call(function_call))
+             response = ", ".join(response)
+         # A single function call is returned as a str, dict
+         else:
+             function_call_dicts = parse_function_call(response[0])
+             response = response[0]
+     except Exception:
+         # Just faithfully return the generated response str to the user
+         pass
+     return response, function_call_dicts
+ ```
+
+ In the current directory, run the example code in `inference_local.py` to see how the model works.
+
+ ```bash
+ python inference_local.py
+ ```
+
+ **Note:** Use `get_prompt` and `format_response` only if you are hosting the model locally. If you are using the Berkeley-hosted models through the chat-completion API, we handle this in the backend, so you don't have to. The model is supported in Hugging Face 🤗 Transformers and can be run locally, for example:
+
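+ The following is a minimal sketch (the generation settings and dtype are assumptions, not project defaults; adjust them to your hardware), reusing `get_prompt`, `format_response`, and the `functions` list defined above:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "giux78/zefiro-funcioncalling-v0.3-merged"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ prompt = get_prompt("What's the weather like in Boston?", functions=functions)
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
+ # Decode only the newly generated tokens, then parse the function call(s)
+ completion = tokenizer.decode(
+     outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
+ )
+ response, function_call_dicts = format_response(completion)
+ print(response, function_call_dicts)
+ ```
+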
+ ## License
+
+ Gorilla OpenFunctions v2 is distributed under the Apache 2.0 license. This software incorporates elements from the Deepseek model. Consequently, the licensing of Gorilla OpenFunctions v2 adheres to the Apache 2.0 license, with additional terms as outlined in [Appendix A](https://github.com/deepseek-ai/DeepSeek-LLM/blob/6712a86bfb7dd25c73383c5ad2eb7a8db540258b/LICENSE-MODEL) of the Deepseek license.
+
+ ## Contributing
+
+ Gorilla is an open-source effort from UC Berkeley, and we welcome contributors.
+ Please email us your comments, criticisms, and questions. More information about the project can be found at [https://gorilla.cs.berkeley.edu/](https://gorilla.cs.berkeley.edu/).