
Gorilla OpenFunctions v2 - SOTA GGUF

Description

This repo contains State Of The Art quantized GGUF format model files for Gorilla OpenFunctions v2.

Quantization was done with an importance matrix that was trained for ~1M tokens (256 batches of 4096 tokens) of training data from gorilla_openfunctions_v1_train.json.

Everything has been reconverted and quantized with a new importance matrix using llama.cpp from April 29th 2024 onwards (as of commit f4ab2a4) to ensure correct pre-tokenization. The new GGUFs will load with older llama.cpp builds, but those may not generate correct prompt tokens, so please use a recent build to ensure the best possible results!

Prompt template: Gorilla OpenFunctions v2

You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction: <<function>>[{"name": "function_name", "description": "Description", "parameters": {...}}, ...]
<<question>>{prompt}
### Response: 
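
To make the template concrete, here is a minimal sketch of how it can be assembled from Python. The helper name build_gorilla_prompt is ours; the original model card's get_prompt helper, shown further down this page, does the same thing.

import json

def build_gorilla_prompt(question: str, functions: list) -> str:
    # Serialize the function definitions to JSON after <<function>> and place
    # the user query after <<question>>, exactly as in the template above.
    system = (
        "You are an AI programming assistant, utilizing the Gorilla LLM model, "
        "developed by Gorilla LLM, and you only answer questions related to computer science. "
        "For politically sensitive questions, security and privacy issues, and other "
        "non-computer science questions, you will refuse to answer."
    )
    return (
        f"{system}\n"
        f"### Instruction: <<function>>{json.dumps(functions)}\n"
        f"<<question>>{question}\n"
        f"### Response: "
    )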

Compatibility

These quantised GGUFv3 files are compatible with llama.cpp from February 27th 2024 onwards, as of commit 0becb22.

They are also compatible with many third party UIs and libraries provided they are built using a recent llama.cpp.

Explanation of quantisation methods


The new methods available are:

  • GGML_TYPE_IQ1_S - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.56 bits per weight (bpw)
  • GGML_TYPE_IQ1_M - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.75 bpw
  • GGML_TYPE_IQ2_XXS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.06 bpw
  • GGML_TYPE_IQ2_XS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.31 bpw
  • GGML_TYPE_IQ2_S - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.5 bpw
  • GGML_TYPE_IQ2_M - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.7 bpw
  • GGML_TYPE_IQ3_XXS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.06 bpw
  • GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw
  • GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw
  • GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw
  • GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw
  • GGML_TYPE_IQ4_NL - 4-bit non-linearly mapped quantization with an importance matrix applied, effectively using 4.5 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
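
As a rough sanity check on the sizes in that table, a file's size is approximately the parameter count times the bits per weight, divided by 8. This is only an estimate (some tensors are stored in other quantization types), as sketched below for the roughly 6.9 billion parameters of this model:

def estimate_gguf_size_gib(n_params: float, bpw: float) -> float:
    # Rough GGUF file size estimate: parameters * bits-per-weight, converted to GiB.
    return n_params * bpw / 8 / 2**30

print(f"{estimate_gguf_size_gib(6.91e9, 3.66):.1f} GiB")  # IQ3_M: ~2.9 GiB, close to the 3.0 GB listed below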

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| gorilla-openfunctions-v2.IQ1_S.gguf | IQ1_S | 1 | 1.5 GB | 3.5 GB | smallest, significant quality loss - TBD: Waiting for this issue to be resolved |
| gorilla-openfunctions-v2.IQ2_XXS.gguf | IQ2_XXS | 2 | 1.8 GB | 3.8 GB | very small, high quality loss |
| gorilla-openfunctions-v2.IQ2_XS.gguf | IQ2_XS | 2 | 1.9 GB | 3.9 GB | very small, high quality loss |
| gorilla-openfunctions-v2.IQ2_S.gguf | IQ2_S | 2 | 2.1 GB | 4.1 GB | small, substantial quality loss |
| gorilla-openfunctions-v2.IQ2_M.gguf | IQ2_M | 2 | 2.2 GB | 4.2 GB | small, greater quality loss |
| gorilla-openfunctions-v2.IQ3_XXS.gguf | IQ3_XXS | 3 | 2.5 GB | 4.5 GB | very small, high quality loss |
| gorilla-openfunctions-v2.IQ3_XS.gguf | IQ3_XS | 3 | 2.7 GB | 4.7 GB | small, substantial quality loss |
| gorilla-openfunctions-v2.IQ3_S.gguf | IQ3_S | 3 | 2.8 GB | 4.8 GB | small, greater quality loss |
| gorilla-openfunctions-v2.IQ3_M.gguf | IQ3_M | 3 | 3.0 GB | 5.0 GB | medium, balanced quality - recommended |
| gorilla-openfunctions-v2.IQ4_XS.gguf | IQ4_XS | 4 | 3.4 GB | 5.4 GB | small, substantial quality loss |

Generated importance matrix file: gorilla-openfunctions-v2.imatrix.dat

Note: the above RAM figures assume no GPU offloading with 4K context. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
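
If you want to pick a file programmatically, a small sketch using the figures from the table above could look like this (the roughly 2 GB gap between file size and max RAM is the 4K-context overhead):

# (file size GB, max RAM GB) copied from the Provided files table above
PROVIDED_FILES = {
    "IQ1_S": (1.5, 3.5), "IQ2_XXS": (1.8, 3.8), "IQ2_XS": (1.9, 3.9),
    "IQ2_S": (2.1, 4.1), "IQ2_M": (2.2, 4.2), "IQ3_XXS": (2.5, 4.5),
    "IQ3_XS": (2.7, 4.7), "IQ3_S": (2.8, 4.8), "IQ3_M": (3.0, 5.0),
    "IQ4_XS": (3.4, 5.4),
}

def pick_quant(ram_budget_gb: float) -> str:
    # Return the largest (generally highest quality) quant whose max RAM fits the budget.
    fitting = [(ram, name) for name, (_, ram) in PROVIDED_FILES.items() if ram <= ram_budget_gb]
    if not fitting:
        raise ValueError("Not enough RAM for any of the provided files")
    return max(fitting)[1]

print(pick_quant(5.0))  # IQ3_M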

Example llama.cpp command

Make sure you are using llama.cpp from commit 0becb22 or later.

./main -ngl 33 -m gorilla-openfunctions-v2.IQ3_M.gguf --color -c 16384 --temp 0 --repeat-penalty 1.1 -p "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\n### Instruction: <<function>>{functions}\n<<question>>{prompt}\n### Response: "

Change -ngl 33 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change -c 16384 to the desired sequence length.

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins

If you are low on VRAM/RAM, try quantizing the K-cache with -ctk q8_0 or even -ctk q4_0 for big memory savings (depending on context size). There is a similar option for the V-cache (-ctv), however that is not working yet.

For other parameters and how to use them, please refer to the llama.cpp documentation

How to run from Python code

You can use GGUF models from Python using the llama-cpp-python module.

How to load this model in Python code, using llama-cpp-python

For full documentation, please see: llama-cpp-python docs.

First install the package

Run one of the following commands, according to your system:

# Prebuilt wheel with basic CPU support
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
# Prebuilt wheel with NVidia CUDA acceleration
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121  # or cu122 etc.
# Prebuilt wheel with Metal GPU acceleration
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
# Build base version with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# Or with Vulkan acceleration
CMAKE_ARGS="-DLLAMA_VULKAN=on" pip install llama-cpp-python
# Or with Kompute acceleration
CMAKE_ARGS="-DLLAMA_KOMPUTE=on" pip install llama-cpp-python
# Or with SYCL acceleration
CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUDA=on"
pip install llama-cpp-python

Simple llama-cpp-python example code

from llama_cpp import Llama
from llama_cpp.llama_grammar import LlamaGrammar
import json

# Chat Completion API
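#
# The JSON schema below is converted to a GBNF grammar, constraining generation to
# an array of {"name": ..., "arguments": {...}} objects, i.e. well-formed function calls.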

grammar = LlamaGrammar.from_json_schema(json.dumps({
    "type": "array",
    "items": {
        "type": "object",
        "required": [ "name", "arguments" ],
        "properties": {
            "name": {
                "type": "string"
            },
            "arguments": {
                "type": "object"
            }
        }
    }
}))

llm = Llama(model_path="./gorilla-openfunctions-v2.IQ3_M.gguf", n_gpu_layers=33, n_ctx=16384)
response = llm.create_chat_completion(
      temperature = 0.0,
      repeat_penalty = 1.1,
      messages = [
        {
          "role": "user",
          "content": "What's the weather like in Oslo and Stockholm?"
        }
      ],
      tools=[{
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
              },
              "unit": {
                "type": "string",
                "enum": [ "celsius", "fahrenheit" ]
              }
            },
            "required": [ "location" ]
          }
        }
      }],
      grammar = grammar
)
print(json.loads(response["choices"][0]["message"]["content"]))
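# Second turn: pass the model's tool call and the tool's result back as messages
# so the model can produce a final, natural-language answer.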

print(llm.create_chat_completion(
      temperature = 0.0,
      repeat_penalty = 1.1,
      messages = [
        {
          "role": "user",
          "content": "What's the weather like in Oslo and Stockholm?"
        },
        { # The tool_calls is from the response to the above with tool_choice active
          "role": "assistant",
          "content": None,
          "tool_calls": [
            {
              "id": "call__0_get_current_weather_cmpl-...",
              "type": "function",
              "function": {
                "name": "get_current_weather",
                "arguments": '{ "location": "Oslo, NO" ,"unit": "celsius"} '
              }
            }
          ]
        },
        { # The tool_call_id is from tool_calls and content is the result from the function call you made
          "role": "tool",
          "content": "20",
          "tool_call_id": "call__0_get_current_weather_cmpl-..."
        }
      ],
      tools=[{
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
              },
              "unit": {
                "type": "string",
                "enum": [ "celsius", "fahrenheit" ]
              }
            },
            "required": [ "location" ]
          }
        }
      }],
      #tool_choice={
      #  "type": "function",
      #  "function": {
      #    "name": "get_current_weather"
      #  }
      #}
))

Original model card: Gorilla OpenFunctions v2

💡 SoTA for open-source models. On-par with GPT-4.

🚀 Check out the Berkeley Function Calling Leaderboard
📣 Read more in our OpenFunctions v2 release blog

Introduction

Gorilla OpenFunctions extends the Large Language Model (LLM) chat-completion feature to formulate executable API calls from natural language instructions and API context. With OpenFunctions v2, we now support:

  1. Multiple functions - choose between functions
  2. Parallel functions - call the same function N times with different parameter values
  3. Multiple & parallel - both of the above in a single chat-completion call (one generation)
  4. Relevance detection - when chatting, chat; when asked for a function, return a function
  5. Python - supports string, number, boolean, list, tuple, dict parameter datatypes and Any for those not natively supported.
  6. JAVA - support for byte, short, int, float, double, long, boolean, char, Array, ArrayList, Set, HashMap, Hashtable, Queue, Stack, and Any datatypes.
  7. JavaScript - support for String, Number, Bigint, Boolean, dict (object), Array, Date, and Any datatypes.
  8. REST - native REST support

Performance

| Model | Accuracy |
| ---- | ---- |
| GPT-4-0125-Preview | 83.80% |
| Gorilla-openfunctions-v2 | 83.55% |
| GPT-3.5-turbo | 81.63% |
| Mistral-medium | 79.56% |
| Nexusflow Raven-v2 | 54.46% |
| GPT-4-0613 | 53.49% |

Models Available

| Model | Functionality |
| ---- | ---- |
| gorilla-openfunctions-v2 | Multiple, parallel, multiple & parallel, relevance detection, Python + JAVA + JS + REST |
| gorilla-openfunctions-v1 | Parallel functions, and can choose between functions |
| gorilla-openfunctions-v0 | Given a function, and user intent, returns properly formatted JSON with the right arguments |

All of our models are hosted on our Huggingface UC Berkeley gorilla-llm org: gorilla-openfunctions-v2, gorilla-openfunctions-v1, and gorilla-openfunctions-v0.

Training

Gorilla OpenFunctions v2 is a 7B parameter model built on top of the DeepSeek Coder LLM. Check out the OpenFunctions-v2 blog to learn more about the data composition and some insights into the training process.

Example Usage (Hosted)

  1. OpenFunctions is compatible with OpenAI Functions:
!pip install openai==0.28.1
  2. Point to Gorilla hosted servers:
import openai

def get_gorilla_response(prompt="Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes", model="gorilla-openfunctions-v2", functions=[]):
  openai.api_key = "EMPTY"
  openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
  try:
    completion = openai.ChatCompletion.create(
      model="gorilla-openfunctions-v2",
      temperature=0.0,
      messages=[{"role": "user", "content": prompt}],
      functions=functions,
    )
    return completion.choices[0]
  except Exception as e:
    print(e, model, prompt)
  3. Pass the user query and the set of functions; Gorilla OpenFunctions returns a fully formatted JSON:
query = "What's the weather like in the two cities of Boston and San Francisco?"
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]
get_gorilla_response(query, functions=functions)
  4. Expected output NEW

Gorilla returns a readily accessible string AND OpenAI-compatible JSON.

{
  "index": 0,
  "message": {
    "role": "assistant",
    "content": "get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')",
    "function_call": [
      {
        "name": "get_current_weather",
        "arguments": {
          "location": "Boston, MA"
        }
      },
      {
        "name": "get_current_weather",
        "arguments": {
          "location": "San Francisco, CA"
        }
      }
    ]
  },
  "finish_reason": "stop"
}

We have retained the string functionality that our community loved from OpenFunctions v1 (get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA') above), and note the function_call key in the JSON, which is OpenAI compatible.

This is possible in OpenFunctions v2, because we ensure that the output includes the name of the argument and not just the value. This enables us to parse the output into a JSON. In those scenarios where the output is not parsable into JSON, we will always return the function call string.
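
To make this concrete, here is a simplified sketch of turning such a call string into OpenAI-style JSON. It is not the parser shipped with OpenFunctions (the openfunctions_utils helpers used further down are the real thing) and it only handles keyword arguments with literal values:

import ast

def parse_call_string(call_string: str) -> list:
    # Parse "f(a=1), g(b=2)" style output into [{"name": ..., "arguments": {...}}, ...].
    tree = ast.parse(call_string, mode="eval").body
    calls = tree.elts if isinstance(tree, ast.Tuple) else [tree]
    return [
        {
            "name": call.func.id,
            "arguments": {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords},
        }
        for call in calls
    ]

print(parse_call_string(
    "get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')"
))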

End to End Example

Run the example code in [ofv2_hosted.py](https://github.com/ShishirPatil/gorilla/tree/main/openfunctions) to see how the model works.

python ofv2_hosted.py

Expected Output:

(.py3) shishir@dhcp-132-64:~/Work/Gorilla/openfunctions/$ python ofv2_hosted.py
--------------------
Function call strings(s): get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')
--------------------
OpenAI compatible `function_call`: [<OpenAIObject at 0x1139ba890> JSON: 
{
  "name": "get_current_weather",
  "arguments": 
  {
    "location": "Boston, MA"
  }
}, <OpenAIObject at 0x1139ba930> JSON: {
  "name": "get_current_weather",
  "arguments": 
  {
    "location": "San Francisco, CA"
  }
}]

Running OpenFunctions Locally

If you want to run OpenFunctions locally, here is the prompt format that we used:

import json

def get_prompt(user_query: str, functions: list = []) -> str:
    """
    Generates a conversation prompt based on the user's query and a list of functions.

    Parameters:
    - user_query (str): The user's query.
    - functions (list): A list of functions to include in the prompt.

    Returns:
    - str: The formatted conversation prompt.
    """
    system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
    if len(functions) == 0:
        return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
    functions_string = json.dumps(functions)
    return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\### Response: "

Further, here is how we format the response:

Install the dependencies with:

pip3 install tree_sitter
git clone https://github.com/tree-sitter/tree-sitter-java.git
git clone https://github.com/tree-sitter/tree-sitter-javascript.git

And you can use the following code to format the response:


from openfunctions_utils import strip_function_calls, parse_function_call

def format_response(response: str):
    """
    Formats the response from the OpenFunctions model.

    Parameters:
    - response (str): The response generated by the LLM.

    Returns:
    - str: The formatted response.
    - dict: The function call(s) extracted from the response.

    """
    function_call_dicts = None
    try:
        response = strip_function_calls(response)
        # Parallel function calls returned as a str, list[dict]
        if len(response) > 1: 
            function_call_dicts = []
            for function_call in response:
                function_call_dicts.append(parse_function_call(function_call))
            response = ", ".join(response)
        # Single function call returned as a str, dict
        else:
            function_call_dicts = parse_function_call(response[0])
            response = response[0]
    except Exception as e:
        # Just faithfully return the generated response str to the user
        pass
    return response, function_call_dicts
        

Note: Use the get_prompt and format_response functions only if you are hosting the model locally. If you are using the Berkeley-hosted models through the chat-completion API, we do this in the backend, so you don't have to. The model is supported in Hugging Face 🤗 Transformers and can be run locally, as sketched below.
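
For instance, a minimal sketch using the 🤗 Transformers pipeline API with the original (non-GGUF) gorilla-llm/gorilla-openfunctions-v2 checkpoint could look like the following; it reuses the get_prompt helper and functions list from above, and the generation parameters are illustrative rather than taken from the original card:

import torch
from transformers import pipeline

# Load the original checkpoint; GGUF files from this repo are for llama.cpp instead.
pipe = pipeline(
    "text-generation",
    model="gorilla-llm/gorilla-openfunctions-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = get_prompt("What's the weather like in Oslo and Stockholm?", functions=functions)
output = pipe(prompt, max_new_tokens=128, do_sample=False, return_full_text=False)
print(output[0]["generated_text"])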

License

Gorilla OpenFunctions v2 is distributed under the Apache 2.0 license. This software incorporates elements from the Deepseek model. Consequently, the licensing of Gorilla OpenFunctions v2 adheres to the Apache 2.0 license, with additional terms as outlined in Appendix A of the Deepseek license.

Contributing

Gorilla is an open source effort from UC Berkeley and we welcome contributors. Please email us your comments, criticism, and questions. More information about the project can be found at https://gorilla.cs.berkeley.edu/
