sanjay920 committed on
Commit
c7e6ac5
1 Parent(s): cda6454

Update README.md

Files changed (1)
  1. README.md +56 -13
README.md CHANGED
@@ -1,19 +1,34 @@
- ---
- language:
- - en
- ---
- ---
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # run1_short_20000
-
- This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the rubra_train_v1_mistral_short dataset.
-
-
- ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 2e-05
@@ -26,9 +41,37 @@ The following hyperparameters were used during training:
  - lr_scheduler_type: cosine
  - num_epochs: 1.0

- ### Framework versions

  - Transformers 4.41.2
  - Pytorch 2.3.1+cu121
  - Datasets 2.19.2
- - Tokenizers 0.19.1

+ # Rubra Mistral-7B-Instruct-v0.2
+
+ ## Model description
+
+ The model is the result of further post-training [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). It is capable of complex tool/function calling.
+
+ ## Training Data
+
+ The model was post-trained with freeze tuning and DPO on a proprietary dataset of diverse function-calling, chat, and instruct data.
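+
+ Freeze tuning updates only a subset of the network's weights while the rest stay frozen. The snippet below is an illustrative sketch of that idea only; which layers Rubra actually trained is not published, and the two-layer choice here is hypothetical:
+
+ ```python
+ # Illustrative freeze tuning: freeze everything, then unfreeze a few
+ # decoder layers. The choice of the last two layers is hypothetical.
+ from transformers import AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
+
+ for param in model.parameters():
+     param.requires_grad = False
+
+ for layer in model.model.layers[-2:]:
+     for param in layer.parameters():
+         param.requires_grad = True
+ ```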
+
+ ## Evaluation
+
+ | Model | Function Calling | MMLU | GPQA | GSM-8K | MATH | MT-bench |
+ |------------------------------------------|------------------|-------|-------|--------|-------|----------|
+ | Mistral 7B Instruct v0.2 | - | 59.27 | 27.68 | 43.21 | 10.30 | 7.50 |
+ | Rubra Enhanced Mistral 7B Instruct v0.2 | 69.28% | 58.90 | 29.91 | 34.12 | 8.36 | 7.36 |
+
+ ## How to use
+
+ You can use the model with the Hugging Face `transformers` library and the Rubra helper library [rubra-tools](https://github.com/rubra-ai/rubra-tools) as follows:
+
+ ```
+ pip install rubra_tools torch==2.3.0 transformers
+ ```
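+
+ A minimal inference sketch follows. It assumes the `preprocess_input` and `postprocess_output` helpers that rubra-tools provides for formatting tool definitions and parsing tool calls; check the rubra-tools README for the exact signatures. The `get_weather` tool is a hypothetical example:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Assumption: these helpers come from the rubra-tools package; verify
+ # the names and signatures against the rubra-tools README.
+ from rubra_tools import preprocess_input, postprocess_output
+
+ model_id = "rubra-ai/Mistral-7B-Instruct-v0.2"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ # A hypothetical OpenAI-style tool the model may choose to call.
+ tools = [{
+     "type": "function",
+     "function": {
+         "name": "get_weather",
+         "description": "Get the current weather for a city.",
+         "parameters": {
+             "type": "object",
+             "properties": {"city": {"type": "string"}},
+             "required": ["city"],
+         },
+     },
+ }]
+ messages = [{"role": "user", "content": "What's the weather in Paris?"}]
+
+ # Fold the tool definitions into the chat messages, run the model,
+ # then parse any tool calls out of the raw completion.
+ formatted = preprocess_input(msgs=messages, tools=tools)
+ input_ids = tokenizer.apply_chat_template(
+     formatted, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ output = model.generate(input_ids, max_new_tokens=512)
+ raw = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
+ print(postprocess_output(raw))
+ ```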
+
+ ## Training Hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 2e-05
  … (six unchanged lines not shown in this diff)
  - lr_scheduler_type: cosine
  - num_epochs: 1.0
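+
+ For illustration, the listed values map onto `transformers.TrainingArguments` as below. This is a sketch only: the entries this diff does not show (batch sizes, optimizer, warmup, and so on) are left out, and `output_dir` is a placeholder:
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Sketch: only the hyperparameters listed above; output_dir is a
+ # placeholder, and unshown settings keep their library defaults.
+ args = TrainingArguments(
+     output_dir="rubra-mistral-7b",
+     learning_rate=2e-5,
+     lr_scheduler_type="cosine",
+     num_train_epochs=1.0,
+ )
+ ```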

+ ## Framework Versions

  - Transformers 4.41.2
  - Pytorch 2.3.1+cu121
  - Datasets 2.19.2
+ - Tokenizers 0.19.1
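+
+ To pin a matching environment (a sketch; the `+cu121` build corresponds to PyTorch's CUDA 12.1 wheels):
+
+ ```
+ pip install transformers==4.41.2 torch==2.3.1 datasets==2.19.2 tokenizers==0.19.1
+ ```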
+
+ ## Limitations and Bias
+
+ While the model performs well on a wide range of tasks, it may still produce biased or incorrect outputs. Users should exercise caution and critical judgment when using it in sensitive or high-stakes applications. The model's outputs are influenced by its training data, which may contain inherent biases.
+
+ ## Ethical Considerations
+
+ Users should ensure that deployments of this model adhere to ethical guidelines and should consider the potential societal impact of the generated text. Misuse of the model to generate harmful or misleading content is strongly discouraged.
+
+ ## Acknowledgements
+
+ We would like to thank Mistral for the base model and LLaMA-Factory for the training utilities.
+
+ ## Contact Information
+
+ For questions or comments about the model, please reach out to [the Rubra team](mailto:rubra@acorn.io).
+
+ ## Citation
+
+ If you use this work, please cite it as:
+
+ @misc{rubra202406mistral7binstructv02,
+   title        = {Rubra-Mistral-7B-Instruct-v0.2},
+   author       = {Sanjay Nadhavajhala and Yingbei Tong},
+   year         = {2024},
+   publisher    = {Hugging Face},
+   howpublished = {\url{https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.2}},
+ }