This model has been quantized with GPTQModel, using the following configuration:

  • bits: 4
  • group_size: 128
  • desc_act: false
  • static_groups: false
  • sym: true
  • lm_head: false
  • damp_percent: 0.01
  • true_sequential: true
  • model_name_or_path:
  • model_file_base_name: model
  • quant_method: gptq
  • checkpoint_format: gptq
  • meta:
    • quantizer: gptqmodel:0.9.2
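These settings describe 4-bit symmetric GPTQ quantization: with `bits: 4` and `sym: true`, each group of `group_size: 128` consecutive input channels shares a single scale, so a row of 4096 input features holds 4096 / 128 = 32 independent scales. The sketch below is a minimal pure-Python illustration of what one such group looks like; it shows plain round-to-nearest quantization only, whereas GPTQModel additionally applies error-compensating weight updates (the actual GPTQ algorithm), so this is not the library's implementation.

```python
# Illustrative sketch only -- NOT GPTQModel's actual code.
# Shows symmetric 4-bit quantization of one weight group,
# matching bits=4, sym=true from the config above.
def quantize_group(weights, bits=4):
    """Symmetric round-to-nearest quantization of one group of weights."""
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit symmetric
    qmin = -(2 ** (bits - 1))             # -8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [min(max(round(w / scale), qmin), qmax) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    """Reconstruct approximate weights from 4-bit integers and the shared scale."""
    return [v * scale for v in q]

# Toy group (real groups hold group_size=128 weights):
weights = [0.5, -2.0, 1.0, 3.5]
q, scale = quantize_group(weights)        # q = [1, -4, 2, 7], scale = 0.5
recon = dequantize_group(q, scale)
```

With `group_size: 128` the per-group scales are stored alongside the packed 4-bit integers, which is why the checkpoint's tensor types include both integer (I32, for packed weights) and floating-point (scales) tensors.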
Model size: 2.95B params (Safetensors; tensor types: I32, BF16, FP16)