---
license: apache-2.0
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
tags:
- fp8
- vllm
---

# Model Card for Mistral-Nemo-Instruct-2407 quantized to FP8 weights, activations, and kv cache

This model has been compressed to FP8 weights, with static per-tensor scales for the activations and kv cache, for use in vLLM.

Usage in vLLM:
```python
from vllm import LLM

model = LLM("mgoin/Mistral-Nemo-Instruct-2407-FP8-KV", kv_cache_dtype="fp8", max_model_len=4096)
print(model.generate("Hello!"))
```
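
Generation defaults can also be set explicitly. Below is a minimal sketch applying the low temperature recommended for Mistral Nemo (see the tip in the Transformers section further down); `SamplingParams` is vLLM's standard sampling configuration:

```python
from vllm import LLM, SamplingParams

llm = LLM("mgoin/Mistral-Nemo-Instruct-2407-FP8-KV", kv_cache_dtype="fp8", max_model_len=4096)

# Mistral Nemo is recommended with a low temperature (~0.3)
params = SamplingParams(temperature=0.3, max_tokens=128)
for output in llm.generate(["Hello!"], params):
    print(output.outputs[0].text)
```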

Script for quantization:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "mistralai/Mistral-Nemo-Instruct-2407"
quantized_model_dir = "Mistral-Nemo-Instruct-2407-FP8-KV"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=False)
tokenizer.pad_token = tokenizer.eos_token

# Load and tokenize all dataset samples for calibration of activation scales
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft")
examples = [tokenizer.apply_chat_template(sample["messages"], tokenize=False) for sample in ds]
examples = tokenizer(examples, padding=True, truncation=True, return_tensors="pt", max_length=4096).to("cuda")
print(examples)

# Define quantization config with static activation scales
quantize_config = BaseQuantizeConfig(
    quant_method="fp8",
    activation_scheme="static",
    # Keep lm_head in its original precision
    ignore_patterns=["re:.*lm_head"],
    # Also attach per-tensor scales to the kv cache via the k/v projections
    kv_cache_quant_targets=("k_proj", "v_proj"),
)

# Load the model, quantize, and save checkpoint
model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```
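
After saving, a quick sanity check can confirm the quantization metadata landed in the checkpoint. This sketch assumes AutoFP8 records its settings under a `quantization_config` key in the checkpoint's `config.json`; adjust the key if your version differs:

```python
import json
from pathlib import Path

# Inspect the saved checkpoint for the FP8 quantization metadata
cfg = json.loads(Path("Mistral-Nemo-Instruct-2407-FP8-KV/config.json").read_text())
print(json.dumps(cfg.get("quantization_config", {}), indent=2))
```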

The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407). Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.

For more details about this model, please refer to our release [blog post](https://mistral.ai/news/mistral-nemo/).

## Key features
- Released under the **Apache 2 License**
- Pre-trained and instructed versions
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement for Mistral 7B

## Model Architecture
Mistral Nemo is a transformer model with the following architecture choices:
- **Layers:** 40
- **Dim:** 5,120
- **Head dim:** 128
- **Hidden dim:** 14,336
- **Activation Function:** SwiGLU
- **Number of heads:** 32
- **Number of kv-heads:** 8 (GQA)
- **Vocabulary size:** 2**17 ~= 128k
- **Rotary embeddings (theta = 1M)**
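
These numbers make the benefit of the FP8 kv cache easy to estimate. A back-of-the-envelope sketch (it ignores the per-tensor scale values and vLLM allocator overhead):

```python
# KV cache per token = 2 (K and V) x layers x kv-heads x head dim x bytes/element
layers, kv_heads, head_dim = 40, 8, 128

per_token_fp16 = 2 * layers * kv_heads * head_dim * 2  # 2 bytes/element
per_token_fp8 = 2 * layers * kv_heads * head_dim * 1   # 1 byte/element

print(f"FP16: {per_token_fp16 / 2**10:.0f} KiB/token")                       # 160 KiB
print(f"FP8:  {per_token_fp8 / 2**10:.0f} KiB/token")                        # 80 KiB
print(f"4096-token context at FP8: {per_token_fp8 * 4096 / 2**20:.0f} MiB")  # 320 MiB
```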

## Metrics

### Main Benchmarks

| Benchmark | Score |
| --- | --- |
| HellaSwag (0-shot) | 83.5% |
| Winogrande (0-shot) | 76.8% |
| OpenBookQA (0-shot) | 60.6% |
| CommonSenseQA (0-shot) | 70.4% |
| TruthfulQA (0-shot) | 50.3% |
| MMLU (5-shot) | 68.0% |
| TriviaQA (5-shot) | 73.8% |
| NaturalQuestions (5-shot) | 31.2% |

### Multilingual Benchmarks (MMLU)

| Language | Score |
| --- | --- |
| French | 62.3% |
| German | 62.7% |
| Spanish | 64.6% |
| Italian | 61.3% |
| Portuguese | 63.3% |
| Russian | 59.2% |
| Chinese | 59.0% |
| Japanese | 59.0% |

## Usage

The model can be used with three different frameworks:

- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`NeMo`](https://github.com/NVIDIA/NeMo): See [nvidia/Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct)

### Mistral Inference

#### Install

It is recommended to use `mistralai/Mistral-Nemo-Instruct-2407` with [mistral-inference](https://github.com/mistralai/mistral-inference). For Hugging Face `transformers` code snippets, see the [Transformers](#transformers) section below.

```
pip install mistral_inference
```

#### Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Nemo-Instruct-2407", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

#### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using

```
mistral-chat $HOME/mistral_models/Nemo-Instruct --instruct --max_tokens 256 --temperature 0.35
```

For example, try out something like:
```
How expensive would it be to ask a window cleaner to clean all windows in Paris? Make a reasonable guess in US dollars.
```

#### Instruction following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

prompt = "How expensive would it be to ask a window cleaner to clean all windows in Paris? Make a reasonable guess in US dollars."

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```

#### Function calling

```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the user's location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```
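
The decoded `result` contains the model's tool call rather than a final answer; executing it is up to the caller. Below is a minimal dispatch sketch. The `[TOOL_CALLS]` marker and JSON payload layout are assumptions about the tokenizer's serialization, and `get_current_weather` here is a local stub:

```py
import json
import re

# Local stub standing in for a real weather lookup
def get_current_weather(location: str, format: str) -> str:
    return f"20 degrees {format} in {location}"

TOOLS = {"get_current_weather": get_current_weather}

# Assumed shape: '[TOOL_CALLS] [{"name": ..., "arguments": {...}}]' as JSON
match = re.search(r"\[TOOL_CALLS\]\s*(\[.*\])", result, re.DOTALL)
if match:
    for call in json.loads(match.group(1)):
        print(TOOLS[call["name"]](**call["arguments"]))
```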

### Transformers

> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install git+https://github.com/huggingface/transformers.git
> ```

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Nemo-Instruct-2407")
chatbot(messages)
```

> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires lower temperatures. We recommend using a temperature of 0.3.
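
Applied to the pipeline above, that recommendation looks like this (a sketch; the keyword arguments are standard `generate` parameters passed through the pipeline, and the output indexing assumes the chat-style return format of recent `transformers` releases):

```py
# Sample with the recommended low temperature
outputs = chatbot(messages, max_new_tokens=256, do_sample=True, temperature=0.3)
print(outputs[0]["generated_text"][-1]["content"])
```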

## Limitations

The Mistral Nemo Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall