---
base_model:
- nazimali/Mistral-Nemo-Kurdish
base_model_relation: finetune
language:
- ku
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
datasets:
- saillab/alpaca-kurdish_kurmanji-cleaned
library_name: transformers
---
This is a 12B-parameter model, finetuned from `nazimali/Mistral-Nemo-Kurdish` on a single Kurdish (Kurmanji) instruction dataset. My intention was to train it on both Kurdish Kurmanji in Latin script and Kurdish Sorani in Arabic script, but training took much longer than anticipated, so I decided to start with one full Kurdish Kurmanji dataset. I will look into a multi-GPU training setup so I don't have to wait all day for results, then train it on both the Kurmanji Latin and Sorani Arabic scripts.

Try the [spaces demo](https://huggingface.co/spaces/nazimali/Mistral-Nemo-Kurdish-Instruct).

### Example usage

#### llama-cpp-python

```python
from llama_cpp import Llama

inference_prompt = """Li jêr rêwerzek heye ku peywirek rave dike, bi têketinek ku çarçoveyek din peyda dike ve tê hev kirin. Bersivek ku daxwazê bi guncan temam dike binivîsin.

### Telîmat:
{}

### Têketin:
{}

### Bersiv:
"""

llm = Llama.from_pretrained(
    repo_id="nazimali/Mistral-Nemo-Kurdish-Instruct",
    filename="Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            # The template has two slots: the instruction and an optional input.
            "content": inference_prompt.format("سڵاو ئەلیکوم، چۆنیت؟", ""),
        }
    ]
)
```

#### llama.cpp

```shell
./llama-cli \
  --hf-repo "nazimali/Mistral-Nemo-Kurdish-Instruct" \
  --hf-file Q4_K_M.gguf \
  -p "selam alikum, tu çawa yî?" \
  --conversation
```

#### Transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

infer_prompt = """Li jêr rêwerzek heye ku peywirek rave dike, bi têketinek ku çarçoveyek din peyda dike ve tê hev kirin. Bersivek ku daxwazê bi guncan temam dike binivîsin.

### Telîmat:
{}

### Têketin:
{}

### Bersiv:
"""

model_id = "nazimali/Mistral-Nemo-Kurdish-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model with 4-bit NF4 quantization to reduce memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model.eval()


def call_llm(user_input, instructions=None):
    instructions = instructions or "tu arîkarek alîkar î"
    prompt = infer_prompt.format(instructions, user_input)

    input_ids = tokenizer(
        prompt,
        return_tensors="pt",
        add_special_tokens=False,
        return_token_type_ids=False,
    ).to("cuda")

    with torch.inference_mode():
        generated_ids = model.generate(
            **input_ids,
            max_new_tokens=120,
            do_sample=True,
            temperature=0.7,
            top_p=0.7,
            num_return_sequences=1,
            pad_token_id=tokenizer.pad_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )

    # Decode, drop special tokens, and strip the echoed prompt.
    decoded_output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return decoded_output.replace(prompt, "")


response = call_llm("سڵاو ئەلیکوم، چۆنیت؟")
print(response)
```

### Training

- Transformers `4.44.2`
- 1× NVIDIA A40
- Duration: 7h 41m 12s

```json
{
    "total_flos": 2225817933447045000,
    "train/epoch": 0.9998075072184792,
    "train/global_step": 2597,
    "train/grad_norm": 1.172538161277771,
    "train/learning_rate": 0,
    "train/loss": 0.7774,
    "train_loss": 0.892096030377038,
    "train_runtime": 27479.3172,
    "train_samples_per_second": 1.512,
    "train_steps_per_second": 0.095
}
```

#### Finetuning data:

- `saillab/alpaca-kurdish_kurmanji-cleaned`
- Dataset number of rows: 52,002
- Filtered on the `instruction` and `output` columns: each must have at least 1 character and fewer than 10,000 characters
- Number of rows used for training: 41,559 (a sketch of this filtering step follows the list)
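The exact preprocessing code isn't published in this card. The following is a minimal sketch of the filtering step described above, assuming the `datasets` library, the default `train` split, and that the length bounds apply to each kept column:

```python
from datasets import load_dataset

# Load the instruction dataset used for finetuning.
dataset = load_dataset("saillab/alpaca-kurdish_kurmanji-cleaned", split="train")

def keep_row(row):
    # Keep a row only if both columns are non-empty and under 10,000 characters.
    return all(
        row[column] is not None and 0 < len(row[column]) < 10_000
        for column in ("instruction", "output")
    )

filtered = dataset.filter(keep_row)
print(f"{len(dataset):,} rows before filtering, {len(filtered):,} after")
```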
#### Finetuning instruction format:

```python
finetune_prompt = """Li jêr rêwerzek heye ku peywirek rave dike, bi têketinek ku çarçoveyek din peyda dike ve tê hev kirin. Bersivek ku daxwazê bi guncan temam dike binivîsin.

### Telîmat:
{}

### Têketin:
{}

### Bersiv:
{}
"""
```
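As a usage illustration, here is how one row could be rendered with the `finetune_prompt` defined above. The example row is made up, and appending the tokenizer's EOS token is my assumption (common for Alpaca-style finetuning), not something stated in this card:

```python
# Minimal sketch: render one instruction/output pair into a training string.
row = {
    "instruction": "Silav, tu çawa yî?",  # hypothetical example row
    "output": "Ez baş im, spas!",
}

# The middle slot (Têketin / input) is left empty: only `instruction` and
# `output` were kept from the dataset.
text = finetune_prompt.format(row["instruction"], "", row["output"])
# text += tokenizer.eos_token  # assumption: EOS appended so generation stops
print(text)
```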