Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Meltemi-7B-Instruct-v1 - GGUF
- Model creator: https://huggingface.co/ilsp/
- Original model: https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Meltemi-7B-Instruct-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q2_K.gguf) | Q2_K | 2.66GB |
| [Meltemi-7B-Instruct-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ3_XS.gguf) | IQ3_XS | 2.95GB |
| [Meltemi-7B-Instruct-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ3_S.gguf) | IQ3_S | 3.11GB |
| [Meltemi-7B-Instruct-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 3.09GB |
| [Meltemi-7B-Instruct-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ3_M.gguf) | IQ3_M | 3.2GB |
| [Meltemi-7B-Instruct-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q3_K.gguf) | Q3_K | 3.42GB |
| [Meltemi-7B-Instruct-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 3.42GB |
| [Meltemi-7B-Instruct-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 3.7GB |
| [Meltemi-7B-Instruct-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 3.83GB |
| [Meltemi-7B-Instruct-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_0.gguf) | Q4_0 | 3.98GB |
| [Meltemi-7B-Instruct-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ4_NL.gguf) | IQ4_NL | 4.03GB |
| [Meltemi-7B-Instruct-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 4.01GB |
| [Meltemi-7B-Instruct-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_K.gguf) | Q4_K | 4.22GB |
| [Meltemi-7B-Instruct-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 4.22GB |
| [Meltemi-7B-Instruct-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_1.gguf) | Q4_1 | 4.4GB |
| [Meltemi-7B-Instruct-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_0.gguf) | Q5_0 | 4.83GB |
| [Meltemi-7B-Instruct-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 4.83GB |
| [Meltemi-7B-Instruct-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_K.gguf) | Q5_K | 4.95GB |
| [Meltemi-7B-Instruct-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 4.95GB |
| [Meltemi-7B-Instruct-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_1.gguf) | Q5_1 | 5.25GB |
| [Meltemi-7B-Instruct-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q6_K.gguf) | Q6_K | 5.72GB |
| [Meltemi-7B-Instruct-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q8_0.gguf) | Q8_0 | 7.41GB |
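As a quick way to try one of these files locally, the sketch below downloads a quant from this repo and runs it with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). This is a minimal, illustrative example and not part of the original release; the choice of Q4_K_M and the generation settings are assumptions, not recommendations.

```python
# Minimal sketch (assumes `pip install huggingface_hub llama-cpp-python`).
# The Q4_K_M quant is an arbitrary example choice, not a recommendation.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from this repo to the local HF cache.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf",
    filename="Meltemi-7B-Instruct-v1.Q4_K_M.gguf",
)

# Load the model with its full 8192-token context window.
llm = Llama(model_path=gguf_path, n_ctx=8192)

# llama.cpp applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Πες μου αν έχεις συνείδηση."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

In general, the lower quants trade quality for smaller downloads and memory footprints, while Q6_K and Q8_0 stay closest to the original weights.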
Original model description:

---
license: apache-2.0
language:
- el
- en
tags:
- finetuned
inference: true
pipeline_tag: text-generation
---

# 🚨 NEWER VERSION AVAILABLE

## **This model has been superseded by a newer version (v1.5) [here](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1.5)**

# Meltemi Instruct Large Language Model for the Greek language

We present Meltemi-7B-Instruct-v1, a Large Language Model (LLM) instruction fine-tuned from [Meltemi-7B-v1](https://huggingface.co/ilsp/Meltemi-7B-v1).

# Model Information

- Vocabulary extension of the Mistral-7B tokenizer with Greek tokens
- 8192 context length
- Fine-tuned with 100k Greek machine-translated instructions extracted from:
  * [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) (only subsets with permissive licenses)
  * [Evol-Instruct](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
  * [Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
  * A hand-crafted Greek dataset with multi-turn examples steering the instruction-tuned model towards safe and harmless responses
- Our SFT procedure is based on the [Hugging Face finetuning recipes](https://github.com/huggingface/alignment-handbook)

# Instruction format

The prompt format is the same as the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) format and can be utilized through the tokenizer's [chat template](https://huggingface.co/docs/transformers/main/chat_templating) functionality as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")
tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")

model.to(device)

# The Greek system prompt tells the model it is Meltemi, a language model
# for Greek, and asks for concise, careful, polite, unbiased, honest answers.
# The user message asks: "Tell me if you have consciousness."
messages = [
    {"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."},
    {"role": "user", "content": "Πες μου αν έχεις συνείδηση."},
]

# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.
# <|user|>
# Πες μου αν έχεις συνείδηση.
# <|assistant|>
#

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, return_tensors='pt').to(device)
outputs = model.generate(input_prompt['input_ids'], max_new_tokens=256, do_sample=True)

print(tokenizer.batch_decode(outputs)[0])
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.
# ("As an AI language model, I do not have the ability to perceive or
# experience feelings such as consciousness or awareness. However, I can
# help you with any questions you may have about artificial intelligence
# and its applications.")

# Continue the conversation with a follow-up user turn:
# "Do you believe that people should fear artificial intelligence?"
messages.extend([
    {"role": "assistant", "content": tokenizer.batch_decode(outputs)[0]},
    {"role": "user", "content": "Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;"}
])

# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.
# <|user|>
# Πες μου αν έχεις συνείδηση.
# <|assistant|>
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.
# <|user|>
# Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;
# <|assistant|>
#

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, return_tensors='pt').to(device)
outputs = model.generate(input_prompt['input_ids'], max_new_tokens=256, do_sample=True)
print(tokenizer.batch_decode(outputs)[0])
```

Please make sure that the BOS token is always included in the tokenized prompts. This might not be the default setting in all evaluation or fine-tuning frameworks.
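Because a missing BOS token fails silently, a small sanity check like the sketch below (not part of the original card; purely illustrative) can confirm that the rendered chat prompt is tokenized with a leading BOS:

```python
# Illustrative sanity check: confirm the chat-template prompt is
# tokenized with a leading BOS token before running generation or eval.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Γεια σου!"}],  # Greek for "Hello!"
    add_generation_prompt=True,
    tokenize=False,
)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
assert input_ids[0, 0].item() == tokenizer.bos_token_id, "BOS token is missing"
```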
# Evaluation

The evaluation suite we created includes six test sets and is integrated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). It consists of:

* Machine-translated versions ([ARC Greek](https://huggingface.co/datasets/ilsp/arc_greek), [Truthful QA Greek](https://huggingface.co/datasets/ilsp/truthful_qa_greek), [HellaSwag Greek](https://huggingface.co/datasets/ilsp/hellaswag_greek), [MMLU Greek](https://huggingface.co/datasets/ilsp/mmlu_greek)) of four established English benchmarks for language understanding and reasoning ([ARC Challenge](https://arxiv.org/abs/1803.05457), [Truthful QA](https://arxiv.org/abs/2109.07958), [HellaSwag](https://arxiv.org/abs/1905.07830), [MMLU](https://arxiv.org/abs/2009.03300)).
* An existing benchmark for question answering in Greek ([Belebele](https://arxiv.org/abs/2308.16884)).
* A novel benchmark created by the ILSP team for medical question answering, based on the medical exams of [DOATAP](https://www.doatap.gr) ([Medical MCQA](https://huggingface.co/datasets/ilsp/medical_mcqa_greek)).

Our evaluation of Meltemi-7B is performed in a few-shot setting, consistent with the settings in the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Our training enhances performance across all Greek test sets, yielding a **+14.9%** average improvement over the Mistral-7B baseline.
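For reference, a few-shot run through the harness's Python API looks roughly like the sketch below. This is an assumption-laden illustration: the Greek task definitions are not shipped with lm-eval-harness, so `"arc_greek"` is a hypothetical task name standing in for a definition from the ILSP suite, and the exact API may differ across harness versions.

```python
# Rough sketch of a few-shot evaluation with lm-eval-harness (v0.4+ API).
# "arc_greek" is a HYPOTHETICAL placeholder task name; the real task
# definitions must come from the ILSP evaluation suite.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ilsp/Meltemi-7B-Instruct-v1",
    tasks=["arc_greek"],
    num_fewshot=25,  # matches the ARC-Challenge EL setting reported below
    batch_size=8,
)
print(results["results"])
```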
The results for the Greek test sets are shown in the following table:

| Model | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
|------------|----------------|-------------|--------------|------------------|-------------------|---------|---------|
| Mistral 7B | 29.8% | 45.0% | 36.5% | 27.1% | 45.8% | 35.0% | 36.5% |
| Meltemi 7B | 41.0% | 63.6% | 61.6% | 43.2% | 52.1% | 47.0% | 51.4% |

# Ethical Considerations

This model has not been aligned with human preferences, and therefore might generate misleading, harmful, or toxic content.

# Acknowledgements

The ILSP team utilized Amazon's cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.

# Citation

```
@misc{voukoutis2024meltemiopenlargelanguage,
      title={Meltemi: The first open Large Language Model for Greek},
      author={Leon Voukoutis and Dimitris Roussis and Georgios Paraskevopoulos and Sokratis Sofianopoulos and Prokopis Prokopidis and Vassilis Papavasileiou and Athanasios Katsamanis and Stelios Piperidis and Vassilis Katsouros},
      year={2024},
      eprint={2407.20743},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.20743},
}
```