The Quantized Ministral 8B Instruct 2410 Model
Original Base Model: mistralai/Ministral-8B-Instruct-2410
Link: https://huggingface.co/mistralai/Ministral-8B-Instruct-2410
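Below is a minimal sketch of how such a GPTQ-quantized checkpoint could be loaded with the Hugging Face transformers library (with optimum and a GPTQ backend installed); the repository id is a placeholder, since this card does not state the quantized model's exact hub path.

```python
# Minimal loading sketch. The repository id below is a hypothetical placeholder;
# replace it with the actual path of this quantized model on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Ministral-8B-Instruct-2410-GPTQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",         # dispatch layers across available GPUs
    torch_dtype=torch.float16, # GPTQ kernels typically run in fp16
)

prompt = "What are the common symptoms of anemia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```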
Quantization Configuration
"quantization_config": {
"bits": 4,
"checkpoint_format": "gptq",
"damp_percent": 0.01,
"desc_act": true,
"group_size": 128,
"model_file_base_name": null,
"model_name_or_path": null,
"quant_method": "gptq",
"static_groups": false,
"sym": true,
"true_sequential": true
},
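For reference, these settings map onto transformers' GPTQConfig roughly as sketched below. This is not the exact pipeline used for this checkpoint (that is in the linked repository); the calibration dataset ("c4") is an assumption introduced here for illustration.

```python
# Sketch: an equivalent GPTQ quantization run with transformers + optimum.
# The calibration dataset is an assumption; the actual recipe lives in the
# medpodgpt repository linked under "Source Code" below.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model = "mistralai/Ministral-8B-Instruct-2410"
tokenizer = AutoTokenizer.from_pretrained(base_model)

gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=True,
    sym=True,
    true_sequential=True,
    dataset="c4",       # calibration data (assumption, not stated on this card)
    tokenizer=tokenizer,
)

quantized_model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=gptq_config,
    device_map="auto",
)
quantized_model.save_pretrained("Ministral-8B-Instruct-2410-GPTQ")
tokenizer.save_pretrained("Ministral-8B-Instruct-2410-GPTQ")
```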
Source Code
The quantization source code is available at https://github.com/vkola-lab/medpodgpt/tree/main/quantization.