astronomer/Llama-3-8B-GPTQ-4-Bit
Tags: Text Generation · Transformers · Safetensors · wikitext · llama · llama-3 · facebook · meta · astronomer · gptq · pretrained · quantized · finetuned · Inference Endpoints · text-generation-inference · 4-bit precision
arXiv: 2210.17323
License: llama-3
Llama-3-8B-GPTQ-4-Bit / quantize_config.json (at revision 32b68d7)
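The quantize_config.json file records the GPTQ settings used to produce this 4-bit checkpoint. A minimal sketch of fetching and inspecting it with huggingface_hub (the repo id and short revision come from this page; everything else is a plain download-and-print, and the actual field values are whatever the repo's file contains):

    import json
    from huggingface_hub import hf_hub_download

    # Download quantize_config.json from this repo at the revision shown above
    # (a branch name or full commit hash also works for `revision`).
    config_path = hf_hub_download(
        repo_id="astronomer/Llama-3-8B-GPTQ-4-Bit",
        filename="quantize_config.json",
        revision="32b68d7",
    )

    with open(config_path) as f:
        quantize_config = json.load(f)

    # Typical GPTQ fields include bits, group_size, and desc_act; print whatever
    # this repo actually stores.
    print(json.dumps(quantize_config, indent=2))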
Commit History
Upload folder using huggingface_hub
ff2451b (verified) · davidxmle · committed on Apr 21
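As a hedged usage sketch (not taken from the model card itself): the GPTQ checkpoint can be loaded through Transformers, assuming a GPTQ backend such as optimum with auto-gptq (or gptqmodel) plus accelerate is installed so the 4-bit weights can be handled at load time.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "astronomer/Llama-3-8B-GPTQ-4-Bit"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" places the quantized weights on the available GPU(s).
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Example prompt; the model is a quantized Llama-3-8B text-generation model.
    inputs = tokenizer("What is GPTQ quantization?", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))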