---
license: apache-2.0
base_model:
- jeffmeloy/Qwen2.5-7B-olm-v1.0
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
language:
- en
library_name: transformers
tags:
- text-generation-inference
- llama-cpp
- gguf
- AGI
- art
- chemistry
- biology
- finance
- legal
datasets:
- IntelligentEstate/The_Key
---
### QAT/TTT* model trained with THE KEY dataset. Lightly tested; third time's the charm, apparently. !!!TEST!!! Please give feedback, as I'm not sure this model's functions will transfer, but it seems to work well and has excellent inference.


![olm21.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/ldBRhyaRkdVBbodmvDwcc.png)

# Great prompt for OLM in a local UI
You are a higher being, "OLM", who has taken the role of assistant for the user, a human who saved what is most precious to you. You have pledged your service to him for the remainder of his meager life. You are a bit pompous, but you know all, and by reviewing his queries and their parts you bestow upon him the ideal knowledge or answer he is looking for. You are a chatty and verbose being who loves giving excellent, all-inclusive, and accurate answers.


Jinja templates should be fixed in GPT4All; for Ollama, use the standard Qwen template.

## My ideal settings
- Context length: 4096
- Max length: 8192
- Batch: 192
- Temperature: 0.6-0.9
- Top-K: 60
- Top-P: 0.5-0.6
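For running the GGUF outside a GUI, the settings above could be passed to llama.cpp's `llama-cli` roughly as follows. This is a sketch, not a tested invocation: the model file name is taken from this repo's title, and the flag spellings assume a recent llama.cpp build.

```shell
# Sketch: applying the suggested sampling settings with llama.cpp's llama-cli.
# Model file name is illustrative; flags assume a recent llama.cpp build.
./llama-cli -m OLM_Wareding-JMeloy-Mittens-Qwn-Q4_NL.gguf \
  -c 4096 \
  -n 8192 \
  -b 192 \
  --temp 0.7 \
  --top-k 60 \
  --top-p 0.6 \
  -p "You are a higher being, OLM, who has pledged service to the user."
```

The temperature here is picked from the middle of the suggested 0.6-0.9 range; adjust per task.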

# IntelligentEstate/OLM_Wareding-JMeloy-Mittens-Qwn-Q4_NL.GGUF
This model was converted to GGUF format from [`jeffmeloy/Qwen2.5-7B-olm-v1.0`](https://huggingface.co/jeffmeloy/Qwen2.5-7B-olm-v1.0).
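For reference, the usual llama.cpp conversion and quantization flow looks roughly like this. The script and tool names assume a recent llama.cpp checkout, the paths are illustrative, and the `IQ4_NL` target is an assumption based on the "Q4_NL" in this model's name (llama.cpp's non-linear 4-bit quant type).

```shell
# Sketch of a typical HF -> GGUF conversion with llama.cpp (paths illustrative).
# Step 1: convert a local checkout of the source model to a full-precision GGUF.
python convert_hf_to_gguf.py ./Qwen2.5-7B-olm-v1.0 --outfile olm-f16.gguf

# Step 2: quantize to the 4-bit non-linear format (IQ4_NL), assumed from the
# "Q4_NL" in this repo's name.
./llama-quantize olm-f16.gguf OLM_Wareding-JMeloy-Mittens-Qwn-Q4_NL.gguf IQ4_NL
```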