---
license: apache-2.0
model-index:
  - name: Rubra-Mistral-7B-Instruct-v0.2
    results:
      - task:
          type: text-generation
        dataset:
          type: MMLU
          name: MMLU
        metrics:
          - type: 5-shot
            value: 58.9
            verified: false
      - task:
          type: text-generation
        dataset:
          type: GPQA
          name: GPQA
        metrics:
          - type: 0-shot
            value: 29.91
            verified: false
      - task:
          type: text-generation
        dataset:
          type: GSM-8K
          name: GSM-8K
        metrics:
          - type: 8-shot, CoT
            value: 34.12
            verified: false
      - task:
          type: text-generation
        dataset:
          type: MATH
          name: MATH
        metrics:
          - type: 4-shot, CoT
            value: 8.36
            verified: false
      - task:
          type: text-generation
        dataset:
          type: MT-bench
          name: MT-bench
        metrics:
          - type: GPT-4 as Judge
            value: 7.36
            verified: false
tags:
  - function-calling
  - tool-calling
  - agentic
  - rubra
---

# Rubra Mistral-7B-Instruct-v0.2

## Model description

The model is the result of further post-training of mistralai/Mistral-7B-Instruct-v0.2. It is capable of complex tool/function calling.

## Training Data

The model was post-trained (freeze tuning and DPO) on a proprietary dataset consisting of diverse function-calling, chat, and instruct data.

## How to use

You can use the model with the Hugging Face `transformers` library and the Rubra helper library `rubra-tools` as follows:

```bash
pip install rubra_tools torch==2.3.0 transformers
```

### 1. Load the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from rubra_tools import preprocess_input, postprocess_output

model_id = "rubra-ai/Mistral-7B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```

### 2. Define Functions

Here we use 4 functions for a simple math chaining question:

```python
functions = [
    {
        'type': 'function',
        'function': {
            'name': 'addition',
            'description': 'Adds two numbers together',
            'parameters': {
                'type': 'object',
                'properties': {
                    'a': {'description': 'First number to add', 'type': 'string'},
                    'b': {'description': 'Second number to add', 'type': 'string'},
                },
                'required': [],
            },
        },
    },
    {
        'type': 'function',
        'function': {
            'name': 'subtraction',
            'description': 'Subtracts two numbers',
            'parameters': {
                'type': 'object',
                'properties': {
                    'a': {'description': 'First number to be subtracted from', 'type': 'string'},
                    'b': {'description': 'Number to subtract', 'type': 'string'},
                },
                'required': [],
            },
        },
    },
    {
        'type': 'function',
        'function': {
            'name': 'multiplication',
            'description': 'Multiply two numbers together',
            'parameters': {
                'type': 'object',
                'properties': {
                    'a': {'description': 'First number to multiply', 'type': 'string'},
                    'b': {'description': 'Second number to multiply', 'type': 'string'},
                },
                'required': [],
            },
        },
    },
    {
        'type': 'function',
        'function': {
            'name': 'division',
            'description': 'Divide two numbers',
            'parameters': {
                'type': 'object',
                'properties': {
                    'a': {'description': 'First number to use as the dividend', 'type': 'string'},
                    'b': {'description': 'Second number to use as the divisor', 'type': 'string'},
                },
                'required': [],
            },
        },
    },
]
```
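
The schemas above only describe the tools to the model; the walkthrough below passes executed results back in by hand. If you want to actually run the calls, one option is to pair each schema with a plain Python implementation. The functions and the `TOOLS` dispatch table here are our own illustration, not part of `rubra_tools`; since the schemas declare string arguments, we coerce them to floats:

```python
# Hypothetical implementations matching the four schemas above.
def addition(a, b):
    return float(a) + float(b)

def subtraction(a, b):
    return float(a) - float(b)

def multiplication(a, b):
    return float(a) * float(b)

def division(a, b):
    return float(a) / float(b)

# Dispatch table used by the execution sketches further down.
TOOLS = {
    "addition": addition,
    "subtraction": subtraction,
    "multiplication": multiplication,
    "division": division,
}
```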

### 3. Start the conversation

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the result of four plus six? Take the result and add 2, then multiply by 5, and then divide by two."},
]

def run_model(messages, functions):
    ## Format messages in Rubra's format
    formatted_msgs = preprocess_input(msgs=messages, tools=functions)

    input_ids = tokenizer.apply_chat_template(
        formatted_msgs,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to(model.device)

    terminators = [tokenizer.eos_token_id]
    # "<|eot_id|>" is a Llama 3 special token; include it only if this
    # tokenizer actually knows it, otherwise convert_tokens_to_ids would
    # return the unknown-token id and could truncate generation early.
    eot_id = tokenizer.convert_tokens_to_ids("<|eot_id|>")
    if eot_id is not None and eot_id != tokenizer.unk_token_id:
        terminators.append(eot_id)

    outputs = model.generate(
        input_ids,
        max_new_tokens=1000,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
    )
    response = outputs[0][input_ids.shape[-1]:]
    raw_output = tokenizer.decode(response, skip_special_tokens=True)
    return raw_output

raw_output = run_model(messages, functions)
# Check if there's a function call
function_call = postprocess_output(raw_output)
if function_call:
    print(function_call)
else:
    print(raw_output)
```

You should see this output, which is a function call made by the AI assistant:

```python
[{'id': 'fc65a533', 'function': {'name': 'addition', 'arguments': '{"a": "4", "b": "6"}'}, 'type': 'function'}]
```
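
Instead of computing the tool result by hand, you can execute the call with the illustrative `TOOLS` table sketched in step 2 (this assumes the OpenAI-style structure shown in the output above):

```python
import json

# Execute the first returned call with the sketched implementations.
call = function_call[0]["function"]
result = TOOLS[call["name"]](**json.loads(call["arguments"]))
print(result)  # 10.0
```

Here `str(result)` would give `'10.0'`; the walkthrough below simply writes `'10'` into the tool message.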

### 4. Add Executed Tool Result to Message History & Continue the Conversation

```python
if function_call:
    # Append the assistant's tool call message
    messages.append({"role": "assistant", "tool_calls": function_call})
    # Append the result of the tool call in OpenAI format; here, adding 4 and 6 gives 10.
    messages.append({'role': 'tool', 'tool_call_id': function_call[0]["id"], 'name': function_call[0]["function"]["name"], 'content': '10'})
    raw_output = run_model(messages, functions)
    # Check if there's a function call
    function_call = postprocess_output(raw_output)
    if function_call:
        print(function_call)
    else:
        print(raw_output)
```

The LLM will make another function call:

```python
[{'id': '2ffc3de4', 'function': {'name': 'addition', 'arguments': '{"a": "10", "b": "2"}'}, 'type': 'function'}]
```
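
Repeating steps 3 and 4 by hand gets tedious for a chained question like this one (4 + 6 = 10, 10 + 2 = 12, 12 × 5 = 60, 60 ÷ 2 = 30). A minimal agent loop, again using the illustrative `TOOLS` table and a safety cap of our own choosing, might look like this:

```python
import json

MAX_TURNS = 8  # assumed safety cap; not part of rubra_tools

for _ in range(MAX_TURNS):
    raw_output = run_model(messages, functions)
    function_call = postprocess_output(raw_output)
    if not function_call:
        print(raw_output)  # final natural-language answer
        break
    # Record the assistant's tool call(s), execute each one, and
    # feed the results back in OpenAI tool-message format.
    messages.append({"role": "assistant", "tool_calls": function_call})
    for call in function_call:
        result = TOOLS[call["function"]["name"]](**json.loads(call["function"]["arguments"]))
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "name": call["function"]["name"],
            "content": str(result),
        })
```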

## Training Hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
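
For readers who want to approximate this setup with Hugging Face `transformers`, the reported values map roughly onto `TrainingArguments` as below. This is a sketch only: it assumes a single device (2 × 12 × 1 = 24 effective batch size), and the dataset, freeze-tuning, and DPO specifics are out of scope:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="rubra-mistral-7b-post-train",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=12,  # 2 x 12 x 1 device = 24 total
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```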

## Framework Versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1

## Limitations and Bias

While the model performs well on a wide range of tasks, it may still produce biased or incorrect outputs. Users should exercise caution and critical judgment when using the model in sensitive or high-stakes applications. The model's outputs are influenced by the data it was trained on, which may contain inherent biases.

## Ethical Considerations

Users should ensure that the deployment of this model adheres to ethical guidelines and consider the potential societal impact of the generated text. Misuse of the model for generating harmful or misleading content is strongly discouraged.

## Acknowledgements

We would like to thank Mistral AI for the base model and LLaMA-Factory for its training utilities.

## Contact Information

For questions or comments about the model, please reach out to the Rubra team.

## Citation

If you use this work, please cite it as:

```bibtex
@misc{rubra_ai_2024,
    author    = {Sanjay Nadhavajhala and Yingbei Tong},
    title     = {Mistral-7B-Instruct-v0.2},
    year      = {2024},
    url       = {https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.2},
    doi       = {10.57967/hf/2641},
    publisher = {Hugging Face}
}
```