Paper: FuseChat: Knowledge Fusion of Chat Models (arXiv:2408.07990)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SCE merge method, with nbeerbower/Llama-3.1-Nemotron-lorablated-70B as the base.
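For intuition, the sketch below shows a rough, simplified version of the SCE idea (select the highest-variance parameter deltas, weight each model by the magnitude of its selected delta, erase minority-sign contributions, and add the result to the base). It is not mergekit's implementation; the function name, tensor handling, and details are illustrative assumptions, and the default `select_topk` simply mirrors the value in the configuration below.

```python
import torch

def sce_merge_tensor(base: torch.Tensor, candidates: list[torch.Tensor],
                     select_topk: float = 0.15) -> torch.Tensor:
    """Simplified, illustrative SCE-style merge for a single weight tensor."""
    # Task vectors: how each candidate model differs from the base.
    deltas = torch.stack([c - base for c in candidates])  # [n_models, *base.shape]

    # Select: keep only the fraction of elements with the highest variance
    # across models (select_topk: 0.15 keeps roughly 15% of elements).
    variance = deltas.var(dim=0)
    k = max(1, int(select_topk * variance.numel()))
    threshold = variance.flatten().topk(k).values.min()
    deltas = deltas * (variance >= threshold)

    # Calculate: weight each model by the squared magnitude of its selected delta.
    weights = deltas.flatten(1).pow(2).sum(dim=1)
    weights = weights / weights.sum().clamp(min=1e-12)
    weights = weights.view(-1, *([1] * (deltas.dim() - 1)))

    # Erase: drop per-element contributions whose sign disagrees with the
    # weighted majority sign (TIES-style sign consensus).
    majority_sign = (deltas * weights).sum(dim=0).sign()
    keep = (deltas.sign() == majority_sign).to(deltas.dtype)

    # Fuse: weighted average of the surviving contributions, added to the base.
    kept_weight = (keep * weights).sum(dim=0).clamp(min=1e-12)
    merged_delta = (deltas * weights * keep).sum(dim=0) / kept_weight
    return base + merged_delta
```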
The following models were included in the merge:
- nbeerbower/llama3.1-kartoffeldes-70B
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
- NaniDAO/Llama-3.3-70B-Instruct-ablated
- mlabonne/Llama-3.1-70B-Instruct-lorablated
- SicariusSicariiStuff/Negative_LLAMA_70B
The following YAML configuration was used to produce this model:
models:
  - model: nbeerbower/llama3.1-kartoffeldes-70B
  - model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
  - model: NaniDAO/Llama-3.3-70B-Instruct-ablated
  - model: mlabonne/Llama-3.1-70B-Instruct-lorablated
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
merge_method: sce
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
  select_topk: 0.15
out_dtype: bfloat16
tokenizer:
  source: SicariusSicariiStuff/Negative_LLAMA_70B
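Saving this configuration to a file and running it through mergekit (for example via its `mergekit-yaml` command-line entry point) should reproduce the merge; exact flags depend on the installed version. Below is a minimal sketch of loading and prompting the resulting model with transformers. The repository ID is a placeholder for wherever the merged weights are stored (a local path also works), and bfloat16 mirrors the `out_dtype` above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder ID: replace with the actual repo or local path of the merged model.
model_id = "your-namespace/your-sce-merge-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype: bfloat16 in the config above
    device_map="auto",
)

# The chat template comes from the merged tokenizer, which this config sources
# from SicariusSicariiStuff/Negative_LLAMA_70B.
messages = [{"role": "user", "content": "Briefly explain what a model merge is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```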