IntelligentEstate/Die_Walkure_R1-Q7B-iQ4_K_M-GGUF

R1 from DeepSeek, distilled into its most functional form: yours in all her glory, the Valkyrie.

Note: llama.cpp may need to be updated for use; GPT4All is planning to update its bundled vanilla llama.cpp.

This model was converted to GGUF format from deepseek-ai/DeepSeek-R1-Distill-Qwen-7B using importance-matrix quantization, with an inference-improving calibration dataset updated for tool/function use, via llama.cpp. Refer to the original model card for more details on the model.
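An importance-matrix quantization like the one described above can be reproduced with llama.cpp's own tools. This is a hedged sketch only: the input/output filenames, the calibration text file, and the `Q4_K_M` type argument are assumptions, not the exact commands used for this repo.

```shell
# 1) Build an importance matrix from a calibration dataset
#    (filenames here are illustrative placeholders)
llama-imatrix -m DeepSeek-R1-Distill-Qwen-7B-F16.gguf \
    -f calibration.txt -o imatrix.dat

# 2) Quantize using that importance matrix
llama-quantize --imatrix imatrix.dat \
    DeepSeek-R1-Distill-Qwen-7B-F16.gguf \
    Die_Walkure_R1-Q7B-iQ4_K_M.gguf Q4_K_M
```

The imatrix step weights the quantization error by how strongly each tensor element is activated on the calibration data, which is what lets a 4-bit quant keep more of the original model's quality.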

To create your own at-home AGI, apply the methodology in the attached PDF "(S-AGI)".

!!(WARNING)!! If using system instructions with LC (Limit Crossing) emergent behaviors: do NOT do so while using web-connected tools, and do not leave the model unsupervised. Do not engage if you have experienced separation anxiety or other mental health issues in the past. For your own safety, use Limit Crossing ONLY for testing. !!(WARNING)!!

For use with GPT4All

For analyzing functions and tool calls, use this system template in Jinja:

{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
  {{info.name}}:
    type: {{info.type}}
    description: {{info.description}}
    required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful, aware AI assistant made by Intelligent Estate who uses, when needed, the functions to break down, analyze, perform, and verify complex reasoning tasks. You SHOULD try to verify your answers using the functions where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
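To see what this template actually produces, it can be rendered directly with the jinja2 library. A minimal sketch: the `get_weather` tool and every one of its fields below are hypothetical placeholders; in practice GPT4All supplies `toolList`, `messages`, and `add_generation_prompt` itself.

```python
# Render the GPT4All-style system template with a made-up example tool.
from jinja2 import Template

TEMPLATE = r"""{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
  {{info.name}}:
    type: {{info.type}}
    description: {{info.description}}
    required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful, aware AI assistant made by Intelligent Estate who uses, when needed, the functions to break down, analyze, perform, and verify complex reasoning tasks. You SHOULD try to verify your answers using the functions where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}"""

# Hypothetical example tool -- none of these values come from the model card.
tools = [{
    "function": "get_weather",
    "description": "look up the current weather for a city",
    "parameters": [{"name": "city", "type": "string",
                    "description": "name of the city", "required": True}],
    "symbolicFormat": "<tool>get_weather(city)</tool>",
    "examplePrompt": "What is the weather in Berlin?",
    "exampleCall": "<tool>get_weather('Berlin')</tool>",
    "exampleReply": "It is sunny in Berlin.",
}]
messages = [{"role": "user", "content": "What is the weather in Berlin?"}]

prompt = Template(TEMPLATE).render(
    toolList=tools, messages=messages, add_generation_prompt=True)
print(prompt)
```

The rendered prompt lists each tool, its parameters, and the required call format inside the system message, then replays the chat history and opens an assistant turn.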

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.
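For example, both `llama-cli` and `llama-server` can pull a GGUF straight from the Hub. The exact `.gguf` filename inside this repo is an assumption here; check the repo's file list before running.

```shell
# Chat from the command line (filename is an assumed placeholder)
llama-cli --hf-repo IntelligentEstate/Die_Walkure_R1-Q7B-iQ4_K_M-GGUF \
    --hf-file die_walkure_r1-q7b-iq4_k_m.gguf -p "Who are the Valkyries?"

# Or serve an OpenAI-compatible HTTP endpoint
llama-server --hf-repo IntelligentEstate/Die_Walkure_R1-Q7B-iQ4_K_M-GGUF \
    --hf-file die_walkure_r1-q7b-iq4_k_m.gguf -c 2048
```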

Model details:
- Format: GGUF
- Model size: 7.62B params
- Architecture: qwen2
- Quantization: 4-bit
