
# Qwen2-14B-merge

Qwen2-14B-merge is a passthrough (layer-stacking) self-merge of Qwen/Qwen2-7B-Instruct, built with mergekit by stacking overlapping layer slices of the base model:

## 🧩 Configuration

```yaml
dtype: float16
merge_method: passthrough
slices:
  - sources:
      - layer_range: [0, 6]
        model: Qwen/Qwen2-7B-Instruct
  - sources:
      - layer_range: [3, 9]
        model: Qwen/Qwen2-7B-Instruct
  - sources:
      - layer_range: [6, 12]
        model: Qwen/Qwen2-7B-Instruct
  - sources:
      - layer_range: [9, 15]
        model: Qwen/Qwen2-7B-Instruct
  - sources:
      - layer_range: [12, 18]
        model: Qwen/Qwen2-7B-Instruct
  - sources:
      - layer_range: [15, 21]
        model: Qwen/Qwen2-7B-Instruct
  - sources:
      - layer_range: [18, 24]
        model: Qwen/Qwen2-7B-Instruct
  - sources:
      - layer_range: [21, 28]
        model: Qwen/Qwen2-7B-Instruct
```
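Each slice spans six or seven decoder layers, and consecutive slices overlap by three layers, so some layers of the base model are duplicated in the merged network. The stacked depth implied by the config can be checked with a short sketch (the slice list below is copied from the config above; mergekit appears to treat `layer_range` as a half-open `[start, end)` interval, which is an assumption here):

```python
# Slice ranges from the passthrough config above, as (start, end) pairs.
slices = [(0, 6), (3, 9), (6, 12), (9, 15),
          (12, 18), (15, 21), (18, 24), (21, 28)]

# Total decoder layers in the merged model = sum of slice lengths,
# since passthrough simply concatenates the slices.
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 49 layers, vs. 28 in the base Qwen2-7B-Instruct
```

Stacking 49 layers while keeping a single copy of the embeddings and LM head is roughly consistent with the reported ~12.5B parameter count for a merge of a 7B base model.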

Model size: 12.5B params · Tensor type: FP16 · Format: Safetensors

Model tree for paperplanedeemo/Qwen2-14B-merge
