EdgeRunner-Command-7B
We’re excited to announce the release of EdgeRunner Command, a cutting-edge 7B-parameter language model designed specifically for function calling and mission tasks. Initialized from our EdgeRunner-Tactical-7B, EdgeRunner Command offers performance comparable to much larger models while maintaining efficiency and speed at the edge.
The model uses the ChatML prompt format and specializes in function calling when used with the transformers library.
Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for function calling.
Use the system role with this message, followed by a JSON list of function signatures, as in the example below:
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required:
[AVAILABLE_TOOLS] [{"name": "search", "description": "Searches the web for the given text and returns the top 5 results.", "parameters": {"type": "object", "properties": {"text": {"type": "string", "description": "The text to search for."}}, "required": ["text"]}}][/AVAILABLE_TOOLS]<|im_end|>
Next, create a user prompt that follows the system prompt, like so:
<|im_start|>user
How to train a dragon?<|im_end|>
The model will then generate a tool call, which your inference code must parse and dispatch to the corresponding function:
<|im_start|>assistant
[TOOL_CALLS] [{ "name": "search", "arguments": {"text": "how to train a dragon"}}]<|im_end|>
Once you have parsed the tool call, invoke the function and pass its return value back to the model in a new message with the tool role, like so:
<|im_start|>tool
[TOOL_RESULTS] [{"name": "search", "content": "..."}][/TOOL_RESULTS]<|im_end|>
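Concretely, the dispatch step might look like this sketch; the available_functions mapping and the placeholder search implementation are illustrative, not part of the model or library:
import json

def search(text: str) -> str:
    # Placeholder; substitute your real web-search implementation.
    return "Top 5 results for: " + text

available_functions = {"search": search}

# `calls` is the list parsed from the [TOOL_CALLS] output above.
calls = [{"name": "search", "arguments": {"text": "how to train a dragon"}}]

results = [
    {"name": c["name"], "content": available_functions[c["name"]](**c["arguments"])}
    for c in calls
]
tool_turn = f"[TOOL_RESULTS] {json.dumps(results)}[/TOOL_RESULTS]"
# Send tool_turn back to the model in a message with role "tool".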
The assistant will then read the function's response and generate a natural-language answer:
<|im_start|>assistant
According to my search, training a dragon is not something ....<|im_end|>
Usage
To use this example, you'll need transformers version 4.42.0 or higher. Please see the function calling guide in the transformers docs for more information.
Example Code
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "edgerunner-ai/EdgeRunner-Command-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

def get_current_weather(location: str, format: str):
    """
    Get the current weather.

    Args:
        location: The city and state, e.g. San Francisco, CA
        format: The temperature unit to use. Infer this from the user's location. (choices: ["celsius", "fahrenheit"])
    """
    pass

conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]

# Render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    tokenize=False,
    add_generation_prompt=True,
)

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Tokenize the prompt and move it to the model's device before generating.
inputs = tokenizer(tool_use_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Note that this example does not show a complete cycle of calling a tool and adding the tool call and tool results to the chat history so that the model can use them in its next generation. For a full tool calling example, please see the function calling guide.
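As a rough sketch of that full cycle under the transformers chat-template conventions (the exact message keys can vary by template, and the argument and result values below are illustrative):
# Append the model's tool call to the history (illustrative values):
tool_call = {"name": "get_current_weather", "arguments": {"location": "Paris, France", "format": "celsius"}}
conversation.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})

# Run the function yourself, then append its result as a tool message:
conversation.append({"role": "tool", "name": "get_current_weather", "content": "22"})

# Re-render the prompt and generate again for the final natural-language answer:
prompt = tokenizer.apply_chat_template(conversation, tools=tools, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))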
Benchmarks
Berkeley Function Calling Benchmark Results
Test Name | Accuracy |
---|---|
multiple_function | 0.94 |
parallel_multiple_function | 0.83 |
parallel_function | 0.77 |
simple | 0.91 |
Other Benchmark Results
Benchmark | Score |
---|---|
Arena Hard | 31.99 |
MMLU-Redux | 67.82 |
GSM | 80.89 |
MT-Bench | 8.32 |