
This is a 4-bit quantized ggml file for CPU inference with llama.cpp.

GPT4 x Alpaca (https://huggingface.co/chavinlo/gpt4-x-alpaca)

The base model used was https://huggingface.co/chavinlo/alpaca-13b

Fine-tuned on GPT-4's responses for 3 epochs.

No LoRA: this is a full fine-tune, not an adapter.
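For reference, a minimal sketch of running a 4-bit ggml file on CPU with llama.cpp's `main` example binary. The model filename below is an assumption (use the actual file from this repo), and llama.cpp must already be built:

```shell
# Run the quantized ggml model on CPU with llama.cpp.
#   -m : path to the ggml model file (filename here is hypothetical)
#   -p : prompt text
#   -n : number of tokens to generate
#   -t : number of CPU threads
./main -m ./gpt4-x-alpaca-13b-ggml-q4_0.bin \
       -p "Write a haiku about llamas." \
       -n 128 -t 8
```

Thread count (`-t`) should roughly match the number of physical CPU cores for best throughput.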