---
base_model: Epiculous/Azure_Dusk-v0.2
license: apache-2.0
inference: false
tags:
- mistral
- nemo
- roleplay
- sillytavern
- gguf
---

**Model name:** <br>
Azure_Dusk-v0.2

**Description:** <br>
"Following up on Crimson_Dawn-v0.2 we have Azure_Dusk-v0.2! Training on Mistral-Nemo-Base-2407 this time I've added significantly more data, as well as trained using RSLoRA as opposed to regular LoRA. Another key change is training on ChatML as opposed to Mistral Formatting." <br>
– Epiculous, the original author. <br>

As described, use the ChatML prompt format. <br>
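For reference, a ChatML-formatted prompt looks like the sketch below (the bracketed placeholders stand in for your actual system prompt, messages, and character text):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```

The model's reply is generated after the final `<|im_start|>assistant` line and is terminated by `<|im_end|>`.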

> [!TIP]
> **Presets:** <br>
> You can use ChatML presets within SillyTavern and adjust from there. <br>
> Alternatively, check out [Virt-io's ChatML v1.9 presets here](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/ChatML/v1.9), make sure you read the [repository page for how to use them properly](https://huggingface.co/Virt-io/SillyTavern-Presets/).

> [!NOTE]
> Original model page: <br>
> https://huggingface.co/Epiculous/Azure_Dusk-v0.2
>
> Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp)-[b3733](https://github.com/ggerganov/llama.cpp/releases/tag/b3733): <br>
> ```
> 1. Base⇢ Convert-GGUF(FP16)⇢ Generate-Imatrix-Data(FP16)
> 2. Base⇢ Convert-GGUF(BF16)⇢ Use-Imatrix-Data(FP16)⇢ Quantize-GGUF(Imatrix-Quants)
> ```
> 
![model image](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/n3-g_YTk3FY-DBzxXd28E.png)
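The two-pass recipe above could be reproduced with llama.cpp's tools roughly as follows. This is a hedged sketch, not the exact commands used for this repository: the file names, the calibration text (`calibration.txt`), and the example `Q4_K_M` target are illustrative assumptions, and it presumes llama.cpp b3733 has been built locally.

```shell
# 1. Base -> GGUF (FP16), then generate imatrix data from the FP16 model
python convert_hf_to_gguf.py Azure_Dusk-v0.2 --outtype f16 --outfile azure-dusk-f16.gguf
./llama-imatrix -m azure-dusk-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Base -> GGUF (BF16), then quantize using the FP16-derived imatrix data
python convert_hf_to_gguf.py Azure_Dusk-v0.2 --outtype bf16 --outfile azure-dusk-bf16.gguf
./llama-quantize --imatrix imatrix.dat azure-dusk-bf16.gguf azure-dusk-Q4_K_M.gguf Q4_K_M
```

Generating the importance matrix from an FP16 conversion while quantizing from a BF16 conversion keeps the quantization source as close to the original weights as possible.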