
The Quantized Vicuna 7B v1.5 Model

Original Base Model: lmsys/vicuna-7b-v1.5 (https://huggingface.co/lmsys/vicuna-7b-v1.5)

Quantization Configuration

"quantization_config": {
    "batch_size": 1,
    "bits": 4,
    "block_name_to_quantize": null,
    "cache_block_outputs": true,
    "damp_percent": 0.1,
    "dataset": null,
    "desc_act": false,
    "exllama_config": {
      "version": 1
    },
    "group_size": 128,
    "max_input_length": null,
    "model_seqlen": null,
    "module_name_preceding_first_block": null,
    "modules_in_block_to_quantize": null,
    "pad_token_id": null,
    "quant_method": "gptq",
    "sym": true,
    "tokenizer": null,
    "true_sequential": true,
    "use_cuda_fp16": false,
    "use_exllama": true
  },
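
The settings above map directly onto the GPTQ integration in Hugging Face transformers. Below is a minimal sketch of reproducing this quantization from the base model; it is not the authors' script (see Source Code below). The calibration dataset ("c4") and the output path are assumptions for illustration, since the stored config records "dataset" as null.

# Sketch: re-quantizing lmsys/vicuna-7b-v1.5 with the settings recorded
# in quantization_config above. Requires a CUDA GPU and the
# auto-gptq/optimum stack alongside transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model = "lmsys/vicuna-7b-v1.5"
tokenizer = AutoTokenizer.from_pretrained(base_model)

gptq_config = GPTQConfig(
    bits=4,                 # "bits": 4
    group_size=128,         # "group_size": 128
    damp_percent=0.1,       # "damp_percent": 0.1
    desc_act=False,         # "desc_act": false
    sym=True,               # "sym": true
    true_sequential=True,   # "true_sequential": true
    dataset="c4",           # assumption: calibration set is null in the card
    tokenizer=tokenizer,
)

# Quantizes the FP16 weights at load time, block by block.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=gptq_config,
    device_map="auto",
)

# Hypothetical output path for the quantized checkpoint.
model.save_pretrained("vicuna-7b-v1.5-GPTQ")
tokenizer.save_pretrained("vicuna-7b-v1.5-GPTQ")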

Source Code

The quantization source code is available at https://github.com/vkola-lab/medpodgpt/tree/main/quantization.
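
For inference, the published checkpoint can be loaded directly with transformers; the stored quantization_config (including the ExLlama v1 kernel setting) is picked up automatically. A minimal sketch, assuming a CUDA machine with auto-gptq/optimum installed; the prompt follows the Vicuna v1.5 "USER:/ASSISTANT:" template.

# Sketch: loading shuyuej/vicuna-7b-v1.5-GPTQ for generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "shuyuej/vicuna-7b-v1.5-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "USER: What does GPTQ quantization do? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))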

Model size: 1.13B params (safetensors; tensor types I32 · FP16)