---
language:
- en
license: apache-2.0
tags:
- chemistry
- biology
- not-for-all-audiences
- merge
- code
- llama-cpp
- gguf-my-repo
base_model: Locutusque/TinyMistral-248M-v2.5
datasets:
- Locutusque/hercules-v1.0
- Open-Orca/SlimOrca-Dedup
inference:
  parameters:
    do_sample: true
    renormalize_logits: false
    temperature: 0.8
    top_p: 0.14
    top_k: 12
    min_new_tokens: 2
    max_new_tokens: 96
    repetition_penalty: 1.15
    no_repeat_ngram_size: 5
    epsilon_cutoff: 0.002
widget:
- text: '<|im_start|>user

    Write me a Python program that calculates the factorial of n. <|im_end|>

    <|im_start|>assistant

    '
---

# BasedBots/TinyMistral-248M-v2.5-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Locutusque/TinyMistral-248M-v2.5-Instruct`](https://huggingface.co/Locutusque/TinyMistral-248M-v2.5-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Locutusque/TinyMistral-248M-v2.5-Instruct) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew (works on macOS and Linux).

```bash
brew install ggerganov/ggerganov/llama.cpp
```
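If the install succeeded, the `llama-cli` binary should be on your PATH; a quick sanity check:

```bash
# Confirm the binary is installed and print its build info
llama-cli --version
```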
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo BasedBots/TinyMistral-248M-v2.5-Instruct-Q4_K_M-GGUF --model tinymistral-248m-v2.5-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
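The widget metadata above uses ChatML-style prompting, and the card's suggested sampling settings (temperature 0.8, top-p 0.14, top-k 12, repetition penalty 1.15, up to 96 new tokens) can be approximated with llama-cli flags. Transformers-only options such as `epsilon_cutoff` and `no_repeat_ngram_size` have no direct llama.cpp equivalents, so treat this as a rough sketch:

```bash
# Approximate the card's suggested sampling settings; -e expands the \n escapes
llama-cli --hf-repo BasedBots/TinyMistral-248M-v2.5-Instruct-Q4_K_M-GGUF \
  --model tinymistral-248m-v2.5-instruct.Q4_K_M.gguf \
  --temp 0.8 --top-p 0.14 --top-k 12 --repeat-penalty 1.15 -n 96 \
  -e -p "<|im_start|>user\nWrite me a Python program that calculates the factorial of n.<|im_end|>\n<|im_start|>assistant\n"
```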

Server:

```bash
llama-server --hf-repo BasedBots/TinyMistral-248M-v2.5-Instruct-Q4_K_M-GGUF --model tinymistral-248m-v2.5-instruct.Q4_K_M.gguf -c 2048
```
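Once the server is up (it listens on port 8080 by default), you can query its built-in `/completion` endpoint; a minimal example, assuming the default host and port:

```bash
# POST a ChatML-formatted prompt to the llama.cpp server's completion endpoint
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "<|im_start|>user\nWrite me a Python program that calculates the factorial of n.<|im_end|>\n<|im_start|>assistant\n",
    "n_predict": 96,
    "temperature": 0.8,
    "top_p": 0.14,
    "top_k": 12,
    "repeat_penalty": 1.15
  }'
```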

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp &&
cd llama.cpp &&
make &&
./main -m tinymistral-248m-v2.5-instruct.Q4_K_M.gguf -n 128
```
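Unlike the `--hf-repo` invocations above, `./main -m` does not download anything, so the quantized file must already be in the working directory; you can fetch it first, for example with `huggingface-cli`:

```bash
# Download the quantized checkpoint from the Hub into the current directory
huggingface-cli download BasedBots/TinyMistral-248M-v2.5-Instruct-Q4_K_M-GGUF \
  tinymistral-248m-v2.5-instruct.Q4_K_M.gguf --local-dir .
```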