ChenMnZ/Llama-2-70b-EfficientQAT-w4g128-GPTQ

Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference · Inference Endpoints · 4-bit precision · gptq
arXiv: 2407.11062
main / Llama-2-70b-EfficientQAT-w4g128-GPTQ / quantize_config.json
Commit History

Upload folder using huggingface_hub
22c8d5f (verified) · ChenMnZ committed on Jul 22
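
The quantize_config.json in this repo stores the GPTQ quantization parameters (the w4g128 suffix indicates 4-bit weights with a group size of 128). Below is a minimal sketch of downloading and inspecting that file and loading the checkpoint with transformers; it assumes optimum and a GPTQ backend such as auto-gptq are installed, and the prompt and generation settings are illustrative, not part of this repo.

```python
import json

import torch
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ChenMnZ/Llama-2-70b-EfficientQAT-w4g128-GPTQ"

# Download and print the GPTQ quantization parameters stored in this repo.
config_path = hf_hub_download(repo_id=repo_id, filename="quantize_config.json")
with open(config_path) as f:
    print(json.dumps(json.load(f), indent=2))

# Load the 4-bit GPTQ checkpoint; transformers picks up the quantization
# settings from the repo. A GPTQ backend (e.g. auto-gptq) plus optimum
# must be installed, and a 70B model needs substantial GPU memory even at 4 bits.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Illustrative prompt; adjust generation settings as needed.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```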