This repo contains a low-rank adapter (LoRA) for LLaMA-7B fitted on a translated Stanford Alpaca dataset. The model was fine-tuned for the Polish language. To run it, go to its GitHub repo. The translated Stanford Alpaca dataset is available here.
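
The GitHub repo linked above contains the canonical run instructions. As a rough illustration only, a LoRA adapter like this one is typically loaded on top of a LLaMA-7B base checkpoint with the `peft` library; the base-model ID, prompt wording, and generation settings below are assumptions, not taken from this card.

```python
# Minimal sketch: load the LoRA adapter on top of a LLaMA-7B base model.
# NOTE: base_model is an assumed checkpoint ID; consult the project's GitHub
# repo for the exact base weights and prompt template actually used.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "decapoda-research/llama-7b-hf"  # assumption: LLaMA-7B base weights
adapter = "Lbuk/alpaca-koza-7b"               # this repo's LoRA adapter

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the low-rank adapter weights to the frozen base model.
model = PeftModel.from_pretrained(model, adapter)
model.eval()

# Example Polish instruction in an Alpaca-style prompt (illustrative wording).
prompt = (
    "### Instrukcja:\nOpisz krótko, czym jest adapter LoRA.\n\n### Odpowiedź:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because only the adapter weights live in this repo, the base LLaMA-7B checkpoint must be obtained separately and referenced at load time, as sketched above.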
