---
base_model:
- Sao10K/L3-70B-Euryale-v2.1
- NousResearch/Hermes-2-Theta-Llama-3-70B
library_name: transformers
tags:
- mergekit
- merge
---
# Hermes-2-Theta-L3-Euryale-Ties-0.8-70B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

I wanted enhanced roleplay performance, so I merged two of my favorite models together. I may release more merges like this with different methods and weightings. At first glance, this config seemed to be the best mix of both worlds. I will share further impressions as I gain more experience with it.

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method with [NousResearch/Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B) as the base.

### Models Merged

The following models were included in the merge:
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Hermes-2-Theta-Llama-3-70B
    parameters:
      weight: 0.8
  - model: Sao10K/L3-70B-Euryale-v2.1
    parameters:
      weight: 0.2
merge_method: ties
base_model: NousResearch/Hermes-2-Theta-Llama-3-70B
parameters:
  density: 0.7
  normalize: true
dtype: float16
tokenizer_source: base
```
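
If you want to reproduce the merge yourself, a minimal sketch with the mergekit CLI is shown below. It assumes mergekit is installed, the YAML above is saved as `config.yml` (a hypothetical filename), and both model repositories can be fetched from the Hugging Face Hub; the flags are optional conveniences.

```sh
# Install mergekit (assumes a recent Python environment)
pip install mergekit

# Run the merge described by config.yml; the merged model is written to ./merged
# --cuda performs the merge arithmetic on GPU, --lazy-unpickle reduces RAM usage
mergekit-yaml config.yml ./merged --cuda --lazy-unpickle
```

The resulting directory can then be loaded like any other `transformers`-compatible Llama 3 checkpoint; note that a 70B merge at float16 needs on the order of 140 GB of disk space and substantial RAM or VRAM to run.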