---
base_model:
- VAGOsolutions/SauerkrautLM-gemma-2-9b-it
- silma-ai/SILMA-9B-Instruct-v1.0
- AXCXEPT/EZO-Humanities-9B-gemma-2-it
- princeton-nlp/gemma-2-9b-it-SimPO
- anthracite-org/magnum-v3-9b-customgemma2
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
license: gemma
---

# Gigantes-v1-gemma2-9b-it

This repo contains a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The hope was that the Japanese, German, and Arabic instruct models would strengthen reasoning and increase the complexity of English text generation.

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) as the base.

### Models Merged

The following models were included in the merge:

* [VAGOsolutions/SauerkrautLM-gemma-2-9b-it](https://huggingface.co/VAGOsolutions/SauerkrautLM-gemma-2-9b-it)
* [silma-ai/SILMA-9B-Instruct-v1.0](https://huggingface.co/silma-ai/SILMA-9B-Instruct-v1.0)
* [AXCXEPT/EZO-Humanities-9B-gemma-2-it](https://huggingface.co/AXCXEPT/EZO-Humanities-9B-gemma-2-it)
* [anthracite-org/magnum-v3-9b-customgemma2](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: princeton-nlp/gemma-2-9b-it-SimPO
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: true
models:
  - model: princeton-nlp/gemma-2-9b-it-SimPO
  - model: AXCXEPT/EZO-Humanities-9B-gemma-2-it
    parameters:
      weight: 0.3
  - model: VAGOsolutions/SauerkrautLM-gemma-2-9b-it
    parameters:
      weight: 0.1
  - model: anthracite-org/magnum-v3-9b-customgemma2
    parameters:
      weight: 0.01
  - model: silma-ai/SILMA-9B-Instruct-v1.0
    parameters:
      weight: 0.001
```
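As a toy illustration of the task-arithmetic idea behind this configuration: each contributing model's delta from the base (its "task vector") is scaled by its weight and added back onto the base weights. This is a simplified NumPy sketch on invented 1-D tensors, not mergekit's actual implementation; the normalization step here is an assumption about what `normalize: true` does.

```python
import numpy as np

def task_arithmetic(base, tuned_models, weights, normalize=True):
    """Merge by adding weighted task vectors (tuned - base) onto the base."""
    weights = np.asarray(weights, dtype=np.float64)
    if normalize:
        # Assumed behavior of `normalize: true`: rescale weights to sum to 1
        weights = weights / weights.sum()
    merged = base.copy()
    for w, tuned in zip(weights, tuned_models):
        merged += w * (tuned - base)  # task vector scaled by its weight
    return merged

# Invented tiny vectors standing in for real 9B-parameter tensors
base = np.zeros(3)
model_a = np.array([1.0, 0.0, 0.0])  # e.g. the weight-0.3 model
model_b = np.array([0.0, 1.0, 0.0])  # e.g. the weight-0.1 model
merged = task_arithmetic(base, [model_a, model_b], [0.3, 0.1])
# With normalization, the effective weights become 0.75 and 0.25
```

Note how normalization means the listed weights act as relative, not absolute, contributions: the 0.001 weight on SILMA makes its influence tiny compared to the 0.3 on EZO-Humanities.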