This repo contains Lbuk/alpaca-koza-7b, a low-rank adapter (LoRA) for LLaMA-7b trained on a translated Stanford Alpaca dataset. The model was fine-tuned for Polish. To run it, see its GitHub repo. The translated Stanford Alpaca dataset is available here.
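
A minimal sketch of how a LoRA adapter like this is typically loaded with the `peft` library. The base checkpoint name, Polish prompt wording, and generation settings below are assumptions for illustration, not details confirmed by this repo; see the GitHub repo for the exact usage.

```python
# Sketch: load the LLaMA-7b base model and apply the LoRA adapter with peft.
# The base checkpoint name and prompt template are assumptions, not from this card.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "decapoda-research/llama-7b-hf"  # assumed LLaMA-7b base checkpoint

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, "Lbuk/alpaca-koza-7b")
model.eval()

# Illustrative Alpaca-style instruction prompt in Polish (wording is an example,
# not the exact template used for fine-tuning).
prompt = (
    "Poniżej znajduje się instrukcja opisująca zadanie. "
    "Napisz odpowiedź, która poprawnie wykonuje polecenie.\n\n"
    "### Instrukcja:\nWyjaśnij, czym jest fotosynteza.\n\n"
    "### Odpowiedź:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```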

