A 4-bit, group-size 128, act_order=True GPTQ quantisation of JackFram/llama-68m (a 68-million-parameter Llama model), created on request for software testing. Not intended for normal usage!
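
For testing purposes, the quantised checkpoint can be loaded like any other GPTQ model through `transformers`. A minimal sketch is shown below, assuming the weights are hosted in a Hugging Face repo (the `model_id` used here is a placeholder, not the actual repo name) and that `transformers`, `optimum`, and `auto-gptq` are installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id for illustration only; substitute the real repo name.
model_id = "your-username/llama-68m-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ weights are dequantised on the fly by auto-gptq via optimum.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Given the tiny 68M parameter count, expect low output quality; the model exists to exercise GPTQ loading and inference code paths, not to produce useful text.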