---
base_model:
- rasyosef/Llama-3.1-Minitron-4B-Chat
- anthracite-org/magnum-v2-4b
- Delta-Vector/Holland-4B-V1
- nvidia/Llama-3.1-Minitron-4B-Width-Base
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- bunnycore/LLama-3.1-4B-TitanFusion
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [nvidia/Llama-3.1-Minitron-4B-Width-Base](https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base) as the base. TIES trims each fine-tuned model's parameter deltas, elects a sign per parameter by majority, and averages only the deltas that agree with the elected sign, which reduces interference when combining several fine-tunes of the same base.

### Models Merged

The following models were included in the merge:

* [rasyosef/Llama-3.1-Minitron-4B-Chat](https://huggingface.co/rasyosef/Llama-3.1-Minitron-4B-Chat)
* [anthracite-org/magnum-v2-4b](https://huggingface.co/anthracite-org/magnum-v2-4b)
* [Delta-Vector/Holland-4B-V1](https://huggingface.co/Delta-Vector/Holland-4B-V1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [bunnycore/LLama-3.1-4B-TitanFusion](https://huggingface.co/bunnycore/LLama-3.1-4B-TitanFusion)

### Configuration

The following YAML configuration was used to produce this model. Every model is merged with equal weight, and `density: 1` keeps all parameter deltas, so nothing is actually trimmed before the TIES sign-election step:

```yaml
models:
  - model: anthracite-org/magnum-v2-4b
    parameters:
      weight: 1
      density: 1
  - model: Magpie-Align/MagpieLM-4B-Chat-v0.1
    parameters:
      weight: 1
      density: 1
  - model: rasyosef/Llama-3.1-Minitron-4B-Chat
    parameters:
      weight: 1
      density: 1
  - model: bunnycore/LLama-3.1-4B-TitanFusion
    parameters:
      weight: 1
      density: 1
  - model: Delta-Vector/Holland-4B-V1
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
parameters:
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```
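
### Reproducing the Merge

The sketch below shows one way to rerun this merge from Python. It is a minimal, illustrative sketch, not part of the original recipe: it assumes mergekit is installed (`pip install mergekit`), that the YAML above has been saved as `merge-config.yaml` (a hypothetical filename), and that mergekit's `mergekit-yaml` command-line entry point is on your PATH.

```python
# Reproduction sketch. Assumptions: mergekit installed via `pip install mergekit`;
# the YAML configuration above saved as merge-config.yaml (illustrative filename).
import subprocess

subprocess.run(
    [
        "mergekit-yaml",      # mergekit's CLI entry point for YAML merge configs
        "merge-config.yaml",  # path to the configuration shown above
        "./merged-model",     # output directory for the merged weights
        "--cuda",             # optional: run the merge on a GPU if available
    ],
    check=True,  # raise CalledProcessError if the merge fails
)
```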
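
## Usage

A minimal loading-and-generation sketch using Transformers. The local path is the output directory from the reproduction step above and is illustrative; substitute the model's Hub repo id if you downloaded it instead. The merged components are all Llama-3.1-based chat models, so the Llama 3.1 chat template is assumed to carry over; verify the merged tokenizer configuration yourself.

```python
# Usage sketch: loads the merged model from a local directory.
# "./merged-model" is an illustrative path, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Assumes the Llama 3.1 chat template survived the merge.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```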