---
license: apache-2.0
model-index:
- name: Rubra-Mistral-7B-Instruct-v0.2
  results:
  - task:
      type: text-generation
    dataset:
      type: MMLU
      name: MMLU
    metrics:
    - type: 5-shot
      value: 58.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: GPQA
      name: GPQA
    metrics:
    - type: 0-shot
      value: 29.91
      verified: false
  - task:
      type: text-generation
    dataset:
      type: GSM-8K
      name: GSM-8K
    metrics:
    - type: 8-shot, CoT
      value: 34.12
      verified: false
  - task:
      type: text-generation
    dataset:
      type: MATH
      name: MATH
    metrics:
    - type: 4-shot, CoT
      value: 8.36
      verified: false
  - task:
      type: text-generation
    dataset:
      type: MT-bench
      name: MT-bench
    metrics:
    - type: GPT-4 as Judge
      value: 7.36
      verified: false
tags:
- function-calling
- tool-calling
- agentic
---

# Rubra Mistral-7B-Instruct-v0.2

## Model description
This model is the result of further post-training of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). It is capable of complex tool/function calling.

## Training Data

The model was post-trained (freeze-tuned and DPO) on a proprietary dataset consisting of diverse function-calling, chat, and instruct data.

## How to use

You can use the model with the Hugging Face `transformers` library together with the Rubra helper library [rubra-tools](https://github.com/rubra-ai/rubra-tools). First install the dependencies:

```
pip install rubra_tools torch==2.3.0 transformers
```

```python
# Minimal loading sketch; the rubra_tools helpers for formatting tool calls are not shown here.
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("rubra-ai/Mistral-7B-Instruct-v0.2")
model = AutoModelForCausalLM.from_pretrained("rubra-ai/Mistral-7B-Instruct-v0.2", device_map="auto")
```
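
Continuing from the snippet above, plain chat generation works through the standard `transformers` API. This is a hedged sketch only: the exact `rubra_tools` interface for passing tool schemas and parsing tool calls is not reproduced here, and the example prompt is purely illustrative.

```python
# Assumes `model` and `tokenizer` from the loading snippet above.
messages = [{"role": "user", "content": "What is the weather like in Boston today?"}]

# Build the prompt with the model's chat template and generate a reply.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```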

## Training Hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
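
For orientation, the listed values map roughly onto Hugging Face `TrainingArguments` as sketched below. This is illustrative only: the actual run was driven through LLaMA-Factory, and the `output_dir` is a hypothetical placeholder.

```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters above; illustrative, not the actual training config.
training_args = TrainingArguments(
    output_dir="rubra-mistral-7b-post-train",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=2,             # 2 x 12 accumulation steps = 24 total train batch size
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=12,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```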

## Framework Versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
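
To reproduce a similar environment, the versions above can be pinned at install time; the package names below are the standard PyPI ones, and the CUDA-specific PyTorch build may require the appropriate extra index URL:

```
pip install transformers==4.41.2 torch==2.3.1 datasets==2.19.2 tokenizers==0.19.1
```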

## Limitations and Bias

While the model performs well on a wide range of tasks, it may still produce biased or incorrect outputs. Users should exercise caution and critical judgment when using the model in sensitive or high-stakes applications. The model's outputs are influenced by the data it was trained on, which may contain inherent biases.

## Ethical Considerations

Users should ensure that the deployment of this model adheres to ethical guidelines and consider the potential societal impact of the generated text. Misuse of the model for generating harmful or misleading content is strongly discouraged.

## Acknowledgements

We would like to thank Mistral AI for the base model and the LLaMA-Factory project for its training utilities.

## Contact Information

For questions or comments about the model, please reach out to [the rubra team](mailto:rubra@acorn.io).

## Citation

If you use this work, please cite it as:

```
@misc {rubra_ai_2024,
	author       = { Sanjay Nadhavajhala and Yingbei Tong },
	title        = { Mistral-7B-Instruct-v0.2 },
	year         = 2024,
	url          = { https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.2 },
	doi          = { 10.57967/hf/2641 },
	publisher    = { Hugging Face }
}
```