---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
- IlyaGusev/saiga_preferences
license: gemma
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/saiga_gemma2_9b-GGUF
This is a quantized version of [IlyaGusev/saiga_gemma2_9b](https://huggingface.co/IlyaGusev/saiga_gemma2_9b) created using llama.cpp

# Original Model Card

# Saiga/Gemma2 9B, Russian Gemma-2-based chatbot

Based on [Gemma-2 9B Instruct](https://huggingface.co/google/gemma-2-9b-it).

## Prompt format

Gemma-2 prompt format:
```
<start_of_turn>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<end_of_turn>
<start_of_turn>user
Как дела?<end_of_turn>
<start_of_turn>model
Отлично, а у тебя?<end_of_turn>
<start_of_turn>user
Шикарно. Как пройти в библиотеку?<end_of_turn>
<start_of_turn>model
```

## Code example
```python
# Example for demonstration purposes only.
# Do NOT serve the model like this in production.
# Use https://github.com/vllm-project/vllm or https://github.com/huggingface/text-generation-inference instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

MODEL_NAME = "IlyaGusev/saiga_gemma2_9b"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    load_in_8bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
generation_config = GenerationConfig.from_pretrained(MODEL_NAME)
print(generation_config)

inputs = ["Почему трава зеленая?", "Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч"]
for query in inputs:
    # Render the user message with the model's chat template.
    prompt = tokenizer.apply_chat_template([{
        "role": "user",
        "content": query
    }], tokenize=False, add_generation_prompt=True)
    data = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
    data = {k: v.to(model.device) for k, v in data.items()}
    output_ids = model.generate(**data, generation_config=generation_config)[0]
    # Strip the prompt tokens before decoding the completion.
    output_ids = output_ids[len(data["input_ids"][0]):]
    output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
    print(query)
    print(output)
    print()
    print("==============================")
    print()
```
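Since this repository ships GGUF quantizations, the sketch below shows one way to run them locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The quant filename pattern, context size, and sampling settings are assumptions, not values prescribed by this card; pick whichever `.gguf` file you actually want. Passing a system message assumes the embedded chat template supports a system turn, which the prompt format above suggests.

```python
# A minimal sketch of local GGUF inference with llama-cpp-python.
# The filename pattern and generation settings are assumptions.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/saiga_gemma2_9b-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; any .gguf file from this repo works
    n_ctx=8192,
    verbose=False,
)

messages = [
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Почему трава зеленая?"},
]
# create_chat_completion applies the chat template embedded in the GGUF file.
response = llm.create_chat_completion(messages=messages, temperature=0.5, max_tokens=512)
print(response["choices"][0]["message"]["content"])
```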
## Versions

v2:
- [258869abdf95aca1658b069bcff69ea6d2299e7f](https://huggingface.co/IlyaGusev/saiga_gemma2_9b/commit/258869abdf95aca1658b069bcff69ea6d2299e7f)
- Other name: saiga_gemma2_9b_abliterated_sft_m3_d9_abliterated_kto_m1_d13
- SFT dataset config: [sft_d9.json](https://github.com/IlyaGusev/saiga/blob/main/configs/datasets/sft_d9.json)
- SFT model config: [saiga_gemma2_9b_sft_m3.json](https://github.com/IlyaGusev/saiga/blob/main/configs/models/saiga_gemma2_9b_sft_m3.json)
- KTO dataset config: [pref_d13.json](https://github.com/IlyaGusev/saiga/blob/main/configs/datasets/pref_d13.json)
- KTO model config: [saiga_gemma2_9b_kto_m1.json](https://github.com/IlyaGusev/saiga/blob/main/configs/models/saiga_gemma2_9b_kto_m1.json)
- SFT wandb: [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/pjsuik1l)
- KTO wandb: [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dsxwvyyx)

v1:
- [fa63cfe898ee6372419b8e38d35f4c41756d2c22](https://huggingface.co/IlyaGusev/saiga_gemma2_9b/commit/fa63cfe898ee6372419b8e38d35f4c41756d2c22)
- Other name: saiga_gemma2_9b_abliterated_sft_m2_d9_abliterated_kto_m1_d11
- SFT dataset config: [sft_d9.json](https://github.com/IlyaGusev/saiga/blob/main/configs/datasets/sft_d9.json)
- SFT model config: [saiga_gemma2_9b_sft_m2.json](https://github.com/IlyaGusev/saiga/blob/main/configs/models/saiga_gemma2_9b_sft_m2.json)
- KTO dataset config: [pref_d11.json](https://github.com/IlyaGusev/saiga/blob/main/configs/datasets/pref_d11.json)
- KTO model config: [saiga_gemma2_9b_kto_m1.json](https://github.com/IlyaGusev/saiga/blob/main/configs/models/saiga_gemma2_9b_kto_m1.json)
- SFT wandb: [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/af49qmbb)
- KTO wandb: [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/5bt7729x)

## Evaluation

* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn

Pivot: gemma_2_9b_it_abliterated

| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
| gemma_2_9b_it_abliterated | 50.00 | 50.00 | 0.00 | 1126 |
| saiga_gemma2_9b, v1 | 48.66 | 45.54 | 2.45 | 1066 |
| saiga_gemma2_9b, v2 | 47.77 | 45.30 | 2.45 | 1074 |
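For intuition about the table columns: win_rate is the mean per-task preference for the model over the pivot, and standard_error is the standard error of that mean. The snippet below is a rough, hypothetical illustration of how those two columns relate, not alpaca_eval's actual implementation (which additionally fits a length-controlled win rate).

```python
# Rough illustration: given per-task preferences (1.0 = model preferred
# over the pivot, 0.0 = pivot preferred, 0.5 = tie), win_rate is the mean
# preference in percent and standard_error is the standard error of that
# mean. A simplification, not alpaca_eval's actual code.
import math

def win_rate_with_se(preferences: list[float]) -> tuple[float, float]:
    n = len(preferences)
    mean = sum(preferences) / n
    variance = sum((p - mean) ** 2 for p in preferences) / (n - 1)
    se = math.sqrt(variance / n)
    return 100.0 * mean, 100.0 * se

# Hypothetical preferences over a handful of tasks
prefs = [1.0, 0.0, 0.5, 1.0, 0.0, 0.0, 1.0, 0.5]
wr, se = win_rate_with_se(prefs)
print(f"win_rate={wr:.2f}, standard_error={se:.2f}")
```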