# *This is a 4-bit quantized ggml file for use with llama.cpp on CPU*

# [GPT4 x Alpaca](https://huggingface.co/chavinlo/gpt4-x-alpaca)


As a base model we used https://huggingface.co/chavinlo/alpaca-13b

Fine-tuned on GPT-4's responses for 3 epochs.

NO LORA (this is a full fine-tune, not a LoRA adapter).
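As a sketch of how a file like this is typically used, the command below follows llama.cpp's classic `main` example for CPU inference. The model filename is hypothetical (substitute the actual `.bin` file from this repo), and the thread count and token budget are illustrative values, not recommendations from the model authors:

```shell
# Hypothetical filename -- replace with the actual ggml .bin from this repo.
# -m model path, -t CPU threads, -n max tokens to generate, -p prompt.
./main \
  -m ./gpt4-x-alpaca-13b-q4_0.bin \
  -t 8 \
  -n 256 \
  -p "### Instruction:
Write a haiku about autumn.

### Response:"
```

The prompt uses the Alpaca-style `### Instruction:` / `### Response:` layout, since this model is an Alpaca fine-tune; adjust the template if the repo specifies a different one.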