---
base_model:
- abhishekchohan/Yi-9B-Forest-DPO-v1.0
- nbeerbower/yi-wissenschaft-9B
- nbeerbower/yi-gutenberg-9B
- qnguyen3/Master-Yi-9B
- wenbopan/Faro-Yi-9B-DPO
- nbeerbower/HolyYi-9B
- nbeerbower/yi-prude-9B
- wenbopan/Faro-Yi-9B
- cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
- Azure99/blossom-v5-9b
library_name: transformers
tags:
- mergekit
- merge
---
# Yiet-9B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [nbeerbower/yi-gutenberg-9B](https://huggingface.co/nbeerbower/yi-gutenberg-9B) as the base model.

### Models Merged

The following models were included in the merge:

* [abhishekchohan/Yi-9B-Forest-DPO-v1.0](https://huggingface.co/abhishekchohan/Yi-9B-Forest-DPO-v1.0)
* [nbeerbower/yi-wissenschaft-9B](https://huggingface.co/nbeerbower/yi-wissenschaft-9B)
* [qnguyen3/Master-Yi-9B](https://huggingface.co/qnguyen3/Master-Yi-9B)
* [wenbopan/Faro-Yi-9B-DPO](https://huggingface.co/wenbopan/Faro-Yi-9B-DPO)
* [nbeerbower/HolyYi-9B](https://huggingface.co/nbeerbower/HolyYi-9B)
* [nbeerbower/yi-prude-9B](https://huggingface.co/nbeerbower/yi-prude-9B)
* [wenbopan/Faro-Yi-9B](https://huggingface.co/wenbopan/Faro-Yi-9B)
* [cognitivecomputations/dolphin-2.9.1-yi-1.5-9b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-yi-1.5-9b)
* [Azure99/blossom-v5-9b](https://huggingface.co/Azure99/blossom-v5-9b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: nbeerbower/HolyYi-9B
  - model: qnguyen3/Master-Yi-9B
  - model: abhishekchohan/Yi-9B-Forest-DPO-v1.0
  - model: cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
  - model: wenbopan/Faro-Yi-9B-DPO
  - model: Azure99/blossom-v5-9b
  - model: nbeerbower/yi-prude-9B
  - model: wenbopan/Faro-Yi-9B
  - model: nbeerbower/yi-wissenschaft-9B
merge_method: model_stock
base_model: nbeerbower/yi-gutenberg-9B
dtype: bfloat16
```
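The `model_stock` method named in the config implements the interpolation from the Model Stock paper: each fine-tuned checkpoint is treated as a task-vector offset from the base, and the layer-wise average of the checkpoints is pulled back toward the base by a ratio derived from the average angle between those offsets. The toy sketch below (a hypothetical `model_stock_layer` helper, not mergekit's actual code) illustrates the idea for a single weight tensor:

```python
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Toy Model Stock merge for one weight tensor (assumes >= 2 checkpoints)."""
    n = len(finetuned)
    # Task vectors: each fine-tuned checkpoint's offset from the base weights.
    deltas = [(w - base).flatten() for w in finetuned]
    # Average pairwise cosine similarity between task vectors (cos of the paper's theta).
    pairs = [F.cosine_similarity(deltas[i], deltas[j], dim=0)
             for i in range(n) for j in range(i + 1, n)]
    cos_theta = torch.stack(pairs).mean()
    # Interpolation ratio from the paper: t = N*cos(theta) / (1 + (N-1)*cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    # Pull the plain average of the fine-tuned weights back toward the base.
    return t * torch.stack(finetuned).mean(dim=0) + (1 - t) * base
```

With nine fine-tuned checkpoints whose task vectors are highly correlated, `t` approaches 1 and the result approaches the plain average; with nearly orthogonal task vectors, `t` shrinks and the merge stays closer to the base.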
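To reproduce the merge, save the config above to a file and run it through mergekit, either with the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged`) or from Python. The sketch below assumes the Python entry points shown in mergekit's README (`MergeConfiguration`, `MergeOptions`, `run_merge`); check the current documentation, as these may shift between versions:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "config.yaml"  # the YAML config above, saved to disk
OUTPUT_PATH = "./merged"     # where the merged weights will be written

with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # also copy the base model's tokenizer
    ),
)
```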
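The merged output is a standard `transformers` checkpoint and loads like any other causal LM. A minimal sketch, assuming the local `./merged` path from the snippet above (substitute a Hub repo id if you are loading a published copy):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./merged"  # or the published Hub repo id for Yiet-9B

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

inputs = tokenizer("The three laws of thermodynamics are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```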