---
license: apache-2.0
tags:
- conversation
- merge
base_model:
- Mistral-7B
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/wH2mR4mD8r1Z3rTH8jgK1.png)
### Design
The design intention is to create a pseudo-philosophical, pseudo-spiritual, pseudo-counseling chatbot model for sounding ideas off; like a mirror, really. This obviously does not constitute medical advice, and if you are in need, seek professional help. The name Apocrypha-7B comes from the fact that it's fake: this isn't a guide, a friend, or a guru. At best, if the model works, it's a sounding board. But I think such things might still be helpful for organising one's own thoughts. This model should still be able to role-play, but given the counseling and theory-of-mind data, it will likely play best in a 'helper' role of some sort.
This Mistral-7B model is a task-arithmetic merge of Epiculous/Fett-uccine-7B (theory-of-mind and gnosis datasets), GRMenon/mental-mistral-7b-instruct-autotrain (mental-health counseling conversations dataset), and teknium/Hermes-Trismegistus-Mistral-7B (OpenHermes + occult datasets).
I will throw a GGUF or two inside a subfolder here.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./Hermes-Trismegistus-7B
    parameters:
      weight: 0.35
  - model: ./mental-mistral-7b
    parameters:
      weight: 0.39
  - model: ./Fett-uccine-7B
    parameters:
      weight: 0.45
merge_method: task_arithmetic
base_model: ./Mistral-7B-v0.1
dtype: bfloat16
```
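For intuition, task arithmetic computes a "task vector" for each fine-tune (its weights minus the base model's), scales each by its configured weight, and adds the weighted sum back onto the base. A toy numpy sketch of that arithmetic (the merge weights match the config above; the three-element vectors are illustrative stand-ins for real parameter tensors):

```python
import numpy as np

# Toy stand-ins for model parameter tensors (illustrative values only).
base = np.array([1.0, 2.0, 3.0])    # ./Mistral-7B-v0.1
hermes = np.array([1.4, 2.0, 3.0])  # ./Hermes-Trismegistus-7B
mental = np.array([1.0, 2.6, 3.0])  # ./mental-mistral-7b
fett = np.array([1.0, 2.0, 3.2])    # ./Fett-uccine-7B

# (weight, fine-tuned model) pairs from the YAML config.
merges = [(0.35, hermes), (0.39, mental), (0.45, fett)]

# Task arithmetic: base + weighted sum of task vectors (fine-tune - base).
merged = base + sum(w * (m - base) for w, m in merges)
print(merged)  # each parameter moves toward whichever fine-tune changed it
```

Because each fine-tune's task vector is sparse relative to the others in this toy example, each merged parameter is pulled only by the model that actually modified it; in a real merge the vectors overlap and the weights arbitrate between them.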
Resources used:
https://huggingface.co/teknium/Hermes-Trismegistus-Mistral-7B
https://huggingface.co/GRMenon/mental-mistral-7b-instruct-autotrain
https://huggingface.co/Epiculous/Fett-uccine-7B/tree/main
https://github.com/cg123/mergekit/tree/main |