TheBlokeAI/jackfram_llama-68m-GPTQ (text generation, llama, safetensors, 4-bit GPTQ)
A 4-bit, 128g, act_order=True GPTQ quantisation of JackFram/llama-68m, a 68 million parameter Llama 1 model; created on request for software testing.
Not for normal usage!
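
For reference, a minimal sketch of loading a GPTQ repository like this one with the Transformers stack. It assumes `transformers`, `optimum`, `auto-gptq`, and `accelerate` are installed, and that the repository id is `TheBlokeAI/jackfram_llama-68m-GPTQ` as shown on this page; adjust as needed.

```python
# Minimal sketch: load the 4-bit GPTQ quantisation with transformers.
# Assumes transformers + optimum + auto-gptq + accelerate are installed;
# the repo id below is taken from the page header and may need adjusting.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBlokeAI/jackfram_llama-68m-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Quick smoke test: the model is only 68M parameters and intended for
# software testing, so output quality is not the point.
inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```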