4-bit GPTQ quantized version of EVA-Qwen2.5-7B-v0.1 for inference with the Private LLM app.
Base model: EVA-Qwen2.5-7B-v0.1
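
For reference, below is a minimal sketch of loading a 4-bit GPTQ checkpoint with Hugging Face transformers (with optimum and auto-gptq installed). This is only an illustration: the repository id is a placeholder, and it assumes the weights follow the standard GPTQ layout readable by transformers. The Private LLM app consumes the quantized model directly and does not require this code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the actual GPTQ repository if loading outside the app.
model_id = "EVA-Qwen2.5-7B-v0.1-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers detects the GPTQ quantization config in the checkpoint and
# loads the 4-bit weights automatically when optimum/auto-gptq are available.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a short haiku about quantization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```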