---
base_model:
- rombodawg/Rombos-Coder-V2.5-Qwen-7b
- huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated
- TechxGenus/CursorCore-QW2.5-7B
- MadeAgents/Hammer2.0-7b
library_name: transformers
tags:
- mergekit
- merge
---
# final_model

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated) as the base.

### Models Merged

The following models were included in the merge:
* [rombodawg/Rombos-Coder-V2.5-Qwen-7b](https://huggingface.co/rombodawg/Rombos-Coder-V2.5-Qwen-7b)
* [TechxGenus/CursorCore-QW2.5-7B](https://huggingface.co/TechxGenus/CursorCore-QW2.5-7B)
* [MadeAgents/Hammer2.0-7b](https://huggingface.co/MadeAgents/Hammer2.0-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Qwen2.5-Coder-7B-Instruct-abliterated-TIES-v2.0
models:
  - model: TechxGenus/CursorCore-QW2.5-7B # Assist programming through aligning anything
    parameters:
      density: 1.0
      weight: 1.0
  - model: MadeAgents/Hammer2.0-7b # Function masking techniques
    parameters:
      density: 1.0
      weight: 1.0
  - model: rombodawg/Rombos-Coder-V2.5-Qwen-7b # Self-instruct fine-tuning
    parameters:
      density: 1.0
      weight: 1.0
merge_method: ties
base_model: huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated # Abliterated model as base
parameters:
  normalize: true
  int8_mask: false
dtype: bfloat16
tokenizer_source: union
```
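
To reproduce the merge, install mergekit (`pip install mergekit`), save the configuration above to a file, and run the `mergekit-yaml` CLI on it, e.g. `mergekit-yaml config.yaml ./final_model` (the file name `config.yaml` and output directory `./final_model` are placeholders; substitute your own).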
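
## Usage

The merged model loads like any other Qwen2.5-Coder checkpoint with 🤗 Transformers. Below is a minimal inference sketch; the path `./final_model` is a placeholder for the actual merge output directory or Hub repo ID, and `device_map="auto"` additionally requires the `accelerate` package.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./final_model"  # placeholder: local merge output or Hub repo ID

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

# Qwen2.5-Coder instruct models expect chat-formatted prompts.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the merge uses `tokenizer_source: union`, the saved tokenizer should already cover the special tokens of all constituent models, so no extra tokenizer setup should be needed.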