Update README.md

README.md CHANGED

@@ -11,4 +11,15 @@ base_model: nvidia/Minitron-4B-Base
 
 # Minitron-4B-Base-FP8
 
 FP8 quantized checkpoint of [nvidia/Minitron-4B-Base](https://huggingface.co/nvidia/Minitron-4B-Base) for use with vLLM.
+
+
+```
+lm_eval --model vllm --model_args pretrained=mgoin/Minitron-4B-Base-FP8 --tasks gsm8k --num_fewshot 5 --batch_size auto
+
+vllm (pretrained=mgoin/Minitron-4B-Base-FP8), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
+|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
+|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
+|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.2305|± |0.0116|
+| | |strict-match | 5|exact_match|↑ |0.2282|± |0.0116|
+```
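For context on what "FP8 quantized" means here: vLLM's FP8 checkpoints typically use the E4M3 format (1 sign bit, 4 exponent bits, 3 mantissa bits, largest finite value 448), with weights rescaled per tensor before casting. The sketch below is purely illustrative — `quantize_e4m3` and `quantize_tensor` are hypothetical helpers, not vLLM's actual quantization kernels — but it shows the basic absmax-scale-then-round scheme:

```python
import math

E4M3_MAX = 448.0  # largest finite value in FP8 E4M3 (1 sign, 4 exponent, 3 mantissa bits)

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest representable FP8 E4M3 value, saturating at +/-448."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    mag = min(abs(x), E4M3_MAX)                 # saturate instead of overflowing
    exp = max(math.floor(math.log2(mag)), -6)   # -6 is the E4M3 subnormal exponent
    step = 2.0 ** (exp - 3)                     # 3 mantissa bits -> 8 steps per binade
    return sign * min(round(mag / step) * step, E4M3_MAX)

def quantize_tensor(weights):
    """Per-tensor absmax scaling: map the largest weight magnitude onto E4M3_MAX."""
    absmax = max(abs(w) for w in weights)
    scale = absmax / E4M3_MAX if absmax else 1.0
    return [quantize_e4m3(w / scale) for w in weights], scale

q, scale = quantize_tensor([1.0, -2.0, 0.3])
dequantized = [v * scale for v in q]  # approximate reconstruction of the originals
```

Storing only the 8-bit codes plus one scale per tensor roughly halves memory versus FP16, at the cost of the small rounding error visible in the reconstruction above — which is why the GSM8K scores are reported for the quantized checkpoint.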