*This is a 4-bit quantized ggml file for use with llama.cpp on CPU.*
# GPT4 x Alpaca

Original model: https://huggingface.co/chavinlo/gpt4-x-alpaca
As a base model we used https://huggingface.co/chavinlo/alpaca-13b, finetuned on GPT-4's responses for 3 epochs. No LoRA was used.
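A minimal usage sketch with llama.cpp, assuming an older build whose binary is `./main`; the model filename and prompt below are illustrative placeholders, not the actual file in this repo:

```shell
# Run CPU inference on the 4-bit quantized ggml file.
# NOTE: the filename is a placeholder; newer llama.cpp builds name the
# binary `llama-cli` instead of `main`.
./main -m ./gpt4-x-alpaca-13b-q4_0.bin \
    -t 8 \
    -n 256 \
    -p "### Instruction:\nExplain what 4-bit quantization is.\n\n### Response:"
```

`-t` sets the number of CPU threads, `-n` caps the number of generated tokens, and `-p` supplies the prompt; the Alpaca-style `### Instruction:` / `### Response:` template matches the base model's finetuning format.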