Wanted to take a stab at this, so I quantized it, though only the 6bpw and 4bpw quants.

This is the EXL2 4bpw version of this model (Statuo/Qwen2.5-32b-Tatewaki-Kunou-EXL2-4bpw). For the original model, go here.
For the 6bpw version, go here.
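If you want to load the EXL2 quant locally, a minimal sketch with the exllamav2 library is below. The model path is a placeholder, and the class and method names reflect recent exllamav2 releases, so check them against the version you have installed.

# Minimal sketch: loading an EXL2 quant with exllamav2.
# Assumes exllamav2 is installed (pip install exllamav2) and the quantized
# weights are downloaded to the placeholder directory below.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Qwen2.5-32b-Tatewaki-Kunou-EXL2-4bpw"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # spread layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Hello, my name is", settings, num_tokens=128))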


This is a merge of pre-trained language models created using mergekit.

Merge Details

[Image: The Blue Thunder]

Merge Method

This model was merged using the TIES merge method, with Sao10K/32B-Qwen2.5-Kunou-v1 as the base.
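For intuition, here is a toy numpy sketch of the TIES idea: trim each model's task vector to its highest-magnitude entries (the density parameter), elect a per-parameter sign by weighted majority, then combine only the deltas that agree with that sign. This is an illustration, not mergekit's actual implementation, and it normalizes by the agreeing weights even though the config below sets normalize: false.

import numpy as np

def ties_merge(base, finetuned, densities, weights):
    # Task vectors: each fine-tune's delta from the shared base model.
    deltas = [ft - base for ft in finetuned]

    # Trim: keep only the top `density` fraction of each delta by magnitude.
    trimmed = []
    for delta, density in zip(deltas, densities):
        k = int(round(delta.size * density))  # number of entries to keep
        if k < delta.size:
            cutoff = np.sort(np.abs(delta))[-k] if k > 0 else np.inf
            delta = np.where(np.abs(delta) >= cutoff, delta, 0.0)
        trimmed.append(delta)

    # Elect sign: per-parameter sign of the weighted sum of trimmed deltas.
    elected = np.sign(sum(w * d for w, d in zip(weights, trimmed)))

    # Merge: weighted average over deltas whose sign agrees with the election.
    num = np.zeros_like(base)
    den = np.zeros_like(base)
    for w, d in zip(weights, trimmed):
        agree = (np.sign(d) == elected) & (d != 0)
        num += np.where(agree, w * d, 0.0)
        den += np.where(agree, w, 0.0)
    merged_delta = np.divide(num, den, out=np.zeros_like(num), where=den != 0)
    return base + merged_delta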

Models Merged

The following models were included in the merge:

nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
Rombo-Org/Rombo-LLM-V2.5-Qwen-32b

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
    parameters:
      density: 0.75
      weight: 0.75
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      density: 0.5
      weight: 0.5
  - model: Sao10K/32B-Qwen2.5-Kunou-v1
    parameters:
      density: 0.8
      weight: 1
  - model: Rombo-Org/Rombo-LLM-V2.5-Qwen-32b
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: Sao10K/32B-Qwen2.5-Kunou-v1
parameters:
  normalize: false
  int8_mask: true
dtype: float16
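To run a config like this yourself, a minimal sketch using mergekit's Python API follows; the file and output paths are placeholders, and the option names are assumptions based on mergekit's example notebook. The mergekit-yaml command-line entry point accepts the same config file and an output directory if you prefer the CLI.

# Minimal sketch: running the YAML config above through mergekit.
# Assumes mergekit is installed (pip install mergekit) and the config
# is saved as config.yaml; out_path is a placeholder.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",  # output directory (placeholder)
    options=MergeOptions(
        cuda=True,            # merge on GPU if available
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
        lazy_unpickle=True,   # lower peak memory while loading shards
    ),
)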