---
base_model: teknium/OpenHermes-2.5-Mistral-7B
language:
- en
license: apache-2.0
model-index:
- name: wasm-OpenHermes-2.5-Mistral-7B-q4f32_1
  results: []
model_creator: Teknium
model_name: WASM OpenHermes 2.5 Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant'
---

# OpenHermes 2.5 (Finetune of Mistral 7B) compiled for WebGPU - q4f32_1

- Original model: [OpenHermes 2.5 - Mistral 7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
- Model creator: [teknium](https://twitter.com/Teknium1) ([support his work](https://github.com/sponsors/teknium1))
- Compiled by: Hrishi Olickel ([say hi on Twitter!](https://twitter.com/hrishioa))

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6469c972a5dd10c9a49d683b/EdXurA0Bzt-RZGmBY6t3B.mp4"></video>

## Description

This is a quantized version of OpenHermes 2.5, a recent finetune of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), ready for in-browser inference over WebGPU. The model performed well in my testing, and [shows promise for actions and RP as well](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/).

From Teknium:

```
OpenHermes 2.5 Mistral 7B is a state of the art Mistral Fine-tune, a continuation of OpenHermes 2 model, which trained on additional code datasets.

Potentially the most interesting finding from training on a good ratio (est. of around 7-14% of the total dataset) of code instruction was that it has boosted several non-code benchmarks, including TruthfulQA, AGIEval, and GPT4All suite. It did however reduce BigBench benchmark score, but the net gain overall is significant.

The code it trained on also improved its HumanEval score (benchmarking done by Glaive team) from **43% @ Pass 1** with OpenHermes 2 to **50.7% @ Pass 1** with OpenHermes 2.5.

OpenHermes was trained on 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape. [More details soon]

These public datasets were extensively filtered, and all formats were converted to ShareGPT, which was then further transformed by axolotl to use ChatML.
```

Another finetune, Dolphin 2.2.1, is [also available here](https://huggingface.co/hrishioa/mlc-chat-dolphin-2.2.1-mistral-7b-q4f32_1), compiled for WebGPU.

Compiled with [mlc-llm](https://llm.mlc.ai/).

Very helpful direction provided by [felladrin](https://github.com/felladrin)!

You can use [his example](https://huggingface.co/spaces/Felladrin/Web-LLM-Mistral-7B-OpenOrca) to get started quickly with this model.

## Prompt template

This model uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format:

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```
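If you are wiring the model up yourself rather than using a ready-made front end, the template above can be assembled programmatically. Here is a minimal sketch in TypeScript; the function name, parameter names, and default system message are illustrative, not part of the model or any library API:

```typescript
// Builds a ChatML-formatted prompt string matching the template above.
// `systemPrompt` and `userPrompt` are plain text; the special <|im_start|>
// and <|im_end|> tokens delimit each turn, and the string ends with an
// opened assistant turn for the model to complete.
function buildChatMLPrompt(
  userPrompt: string,
  systemPrompt: string = "You are a helpful AI assistant."
): string {
  return (
    `<|im_start|>system\n${systemPrompt}<|im_end|>\n` +
    `<|im_start|>user\n${userPrompt}<|im_end|>\n` +
    `<|im_start|>assistant\n`
  );
}
```

Whatever runtime you use, the generated text should be cut off at the first `<|im_end|>` the model emits.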