# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method, with Undi95/Meta-Llama-3-8B-hf as the base model.
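TIES builds a task vector (the delta from the base model) for each source model, trims each delta to its highest-magnitude entries according to `density`, resolves sign conflicts by election, and adds the surviving, `weight`-scaled deltas back onto the base. The sketch below illustrates the idea on a single parameter tensor; it is a simplification for the reader, not mergekit's implementation, and the function name and exact trimming/election rules are assumptions.

```python
# Minimal, illustrative sketch of TIES merging for one parameter tensor.
# Not mergekit's code; trimming and sign-election rules are simplified.
import torch

def ties_merge_tensor(base, finetuned, densities, weights):
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                                   # task vector
        k = max(1, int(density * delta.numel()))            # entries to keep
        cutoff = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        trimmed = torch.where(delta.abs() >= cutoff, delta, torch.zeros_like(delta))
        deltas.append(weight * trimmed)                      # weight-scaled delta

    stacked = torch.stack(deltas)
    elected = stacked.sum(dim=0).sign()                      # elect a sign per entry
    agrees = (stacked.sign() == elected) | (stacked == 0)    # drop conflicting entries
    merged_delta = (stacked * agrees.to(stacked.dtype)).sum(dim=0)  # normalize: false -> plain sum
    return base + merged_delta
```

In the configuration below, Sao10K/L3-8B-Stheno-v3.2 contributes the largest share (density and weight 0.5) and jeiku/Chaos_RP_l3_8B the smallest (0.1).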
### Models Merged
The following models were included in the merge:
- Sao10K/L3-8B-Stheno-v3.2
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- jeiku/Chaos_RP_l3_8B
- Nitral-AI/Poppy_Porpoise-0.72-L3-8B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
    parameters:
      density: 0.2
      weight: 0.2
  - model: jeiku/Chaos_RP_l3_8B
    parameters:
      density: 0.1
      weight: 0.1
  - model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
    parameters:
      density: 0.2
      weight: 0.2
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: Undi95/Meta-Llama-3-8B-hf
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
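The merge can be re-run with mergekit, either through the `mergekit-yaml` CLI or its Python API. The sketch below follows mergekit's documented Python usage and assumes the configuration above is saved as `config.yml`; the output path and option values are placeholders.

```python
# Sketch: re-running this merge with mergekit's Python API. Paths and options
# are placeholders; `mergekit-yaml config.yml ./merged-model` is the
# equivalent command-line invocation.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",  # output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```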