OpenLLaMA 3B checkpoints (350-billion-token and 600-billion-token previews) converted to the GGML format and quantized to q4_0 for inference with llama.cpp.