This model was converted to GGUF format from [`driaforall/Dria-Agent-a-3B`](https://huggingface.co/driaforall/Dria-Agent-a-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/driaforall/Dria-Agent-a-3B) for more details on the model.
---
## Model details

Dria-Agent-α is a series of large language models trained on top of the Qwen2.5-Coder series, specifically the Qwen/Qwen2.5-Coder-3B-Instruct and Qwen/Qwen2.5-Coder-7B-Instruct models, intended for agentic applications. These models are the first instalment of our agent-focused LLMs (hence the α in the name), which we hope to improve with better and more elaborate techniques in subsequent releases.

Dria-Agent-α employs Pythonic function calling: the LLM interacts with the provided tools and outputs actions as blocks of Python code. This approach was inspired by previous work, including but not limited to DynaSaur, RLEF, ADAS and CAMEL, and has a few advantages over traditional JSON-based function calling:

- **One-shot parallel multiple function calls:** The model can utilise many synchronous processes in one chat turn to arrive at a solution that would take other function calling models multiple turns of conversation.
- **Free-form reasoning and actions:** The model provides its reasoning traces freely in natural language and its actions in Python code blocks, as it already tends to do without special prompting or tuning. This mitigates the potential performance loss caused by imposing specific formats on LLM outputs, as discussed in *Let Me Speak Freely?*
- **On-the-fly complex solution generation:** The solution the model provides is essentially a Python program, excluding some "risky" builtins like `exec`, `eval` and `compile` (see the full list in the Quickstart below). This enables the model to implement custom complex logic with conditionals and synchronous pipelines (using the output of one function in the next function's arguments), which is not possible with current JSON-based function calling methods (as far as we know).
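To make the pipelining advantage concrete, here is a minimal, self-contained sketch of the kind of single-turn program this enables. `lookup_city` and `get_forecast` are hypothetical stand-in tools, not functions shipped with the model:

```python
# Hypothetical stand-in tools: Pythonic function calling lets the model
# emit both dependent calls in a single turn, feeding one result into
# the next call's arguments.
def lookup_city(name: str) -> str:
    # Toy resolver mapping a city name to a city code
    return {"berlin": "DE-BER"}.get(name.lower(), "UNKNOWN")

def get_forecast(city_code: str) -> dict:
    # Toy forecast lookup keyed by city code
    return {"city_code": city_code, "high_c": 21}

# What a single model action block might look like:
code = lookup_city("Berlin")
forecast = get_forecast(code)
```

A JSON-based caller would need one turn per call here, since the second call's argument only exists after the first returns.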

## Quickstart
````python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "driaforall/Dria-Agent-a-3B"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Please use our provided prompt for best performance
SYSTEM_PROMPT = """
You are an expert AI assistant that specializes in providing Python code to solve the task/problem at hand provided by the user.

You can use Python code freely, including the following available functions:

<|functions_schema|>
{{functions_schema}}
<|end_functions_schema|>

The following dangerous builtins are restricted for security:
- exec
- eval
- execfile
- compile
- importlib
- input
- exit

Think step by step and provide your reasoning, outside of the function calls.
You can write Python code and use the available functions. Provide all your python code in a SINGLE markdown code block like the following:

```python
result = example_function(arg1, "string")
result2 = example_function2(result, arg2)
```

DO NOT use print() statements AT ALL. Avoid mutating variables whenever possible.
""".strip()


get_sample_data = """
def check_availability(day: str, start_time: str, end_time: str) -> bool:
    \"\"\"
    Check if a time slot is available on a given day.

    Args:
    - day: The day to check in YYYY-MM-DD format
    - start_time: Start time in HH:MM format
    - end_time: End time in HH:MM format

    Returns:
    - True if slot is available, False otherwise
    \"\"\"
    pass

def make_appointment(day: str, start_time: str, end_time: str, title: str) -> dict:
    \"\"\"
    Make an appointment for a given time slot.

    Args:
    - day: The day to make appointment in YYYY-MM-DD format
    - start_time: Start time in HH:MM format
    - end_time: End time in HH:MM format
    - title: The title of the appointment

    Returns:
    - A dictionary with the appointment details and if it's made or not.
      dict keys:
      - day (str): The day the appointment is on, in YYYY-MM-DD format
      - start_time (str): Start time in HH:MM format
      - end_time (str): End time in HH:MM format
      - appointment_made (bool): Whether the appointment is successfully made or not.
    \"\"\"
    pass

def add_to_reminders(reminder_text: str) -> bool:
    \"\"\"
    Add a text to reminders.

    Args:
    - reminder_text: The text to add to reminders

    Returns:
    - Whether the reminder was successfully created or not.
    \"\"\"
    pass
"""

# Helper function to create the system prompt for our model
def format_prompt(tools: str):
    return SYSTEM_PROMPT.replace("{{functions_schema}}", tools)

system_prompt = format_prompt(get_sample_data)

USER_QUERY = """
Can you check if I have tomorrow 10:00-12:00 available and make an appointment for a meeting
with my thesis supervisor if so? If you made the appointment, please add it to my reminders.
"""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": USER_QUERY},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=2048
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````
The output should be something like:

```python
# Get today's date and calculate tomorrow's date
from datetime import datetime, timedelta
today = datetime.now()
tomorrow = (today + timedelta(days=1)).strftime("%Y-%m-%d")

# Define the time slots
start_time = "10:00"
end_time = "12:00"

# Check availability first
is_available = check_availability(tomorrow, start_time, end_time)

# Only proceed with making the appointment if it's available
appointment_result = (
    make_appointment(
        day=tomorrow,
        start_time=start_time,
        end_time=end_time,
        title="Meeting with Thesis Supervisor"
    )
    if is_available
    else {"appointment_made": False}
)

# Add to reminders only if the appointment was made
if appointment_result["appointment_made"]:
    add_to_reminders("Meeting with Thesis Supervisor scheduled for 10:00 AM tomorrow")
```
This code will:

1. Calculate tomorrow's date in YYYY-MM-DD format
2. Check if the 10:00-12:00 slot is available
3. If available, make the appointment with the specified details
4. If the appointment is successfully made, add a reminder to the system

The code handles all error cases implicitly through the boolean returns of the functions. If any step fails, the subsequent steps won't execute, preventing partial or invalid appointments.
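The card stops at printing the response; a host application still has to run the generated program. Below is a hedged, host-side sketch (our addition, not part of the original card) of extracting the model's Python action block and executing it with the restricted builtins from the system prompt stripped out of the namespace. `extract_code_block` and `run_action` are hypothetical helper names:

```python
import builtins
import re

# Builtins the system prompt declares off-limits for model-generated code.
RESTRICTED = {"exec", "eval", "execfile", "compile", "importlib", "input", "exit"}

def extract_code_block(response: str) -> str:
    # Pull the first fenced Python block out of the model response.
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    if match is None:
        raise ValueError("no python code block in response")
    return match.group(1)

def run_action(code: str, tools: dict) -> dict:
    # Execute the model's code with the risky builtins removed and the
    # provided tool functions injected; return the resulting namespace.
    safe_builtins = {
        name: getattr(builtins, name)
        for name in dir(builtins)
        if name not in RESTRICTED
    }
    namespace = {"__builtins__": safe_builtins, **tools}
    exec(code, namespace)  # host-side exec; the model's code itself cannot call it
    return namespace
```

With this setup, a response whose action block reads `x = double(21)` runs with `double` supplied via `tools`, while a call to `eval` inside the block raises `NameError`. Note this namespace restriction is a sketch, not a security boundary; real deployments should sandbox execution.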
## Evaluation & Performance

We evaluate the model on the following benchmarks:

- Berkeley Function Calling Leaderboard (BFCL)
- MMLU-Pro
- Dria-Pythonic-Agent-Benchmark (DPAB): the benchmark we curated with synthetic data generation, model-based validation, filtering and manual selection to evaluate LLMs on their Pythonic function calling ability, spanning multiple scenarios and tasks. More detailed information about the benchmark and the GitHub repo will be released soon.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)