---
license: other
datasets:
- raicrits/Orca_ITA_200k
language:
- it
pipeline_tag: text-generation
tags:
- LLM
- Italian
- Orca
- Hermes
- LLama2
---

# Model Card for raicrits/Hermes7b_ITA_v1

An open-source LLaMA 2 language model with 7b parameters, fine-tuned from the base model [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) to follow instructions in Italian.

### Model Description

This model is a 7b-parameter LLM based on [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b), a version of [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) fine-tuned to follow instructions. The model was further fine-tuned to follow instructions in Italian, using the [LoRA](https://arxiv.org/abs/2106.09685) approach and a dataset of 120k random instruction/answer pairs from [raicrits/Orca_ITA_200k](https://huggingface.co/datasets/raicrits/Orca_ITA_200k).

This repository contains the model weights merged with the LoRA adapters obtained during the fine-tuning procedure.

- **Developed by:** Stefano Scotta (stefano.scotta@rai.it)
- **Model type:** LLM fine-tuned to follow instructions
- **Language(s) (NLP):** Italian
- **License:** Other
- **Finetuned from model:** [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)

## Uses

The model can be used as-is to respond to simple instructions in Italian, or it can be further fine-tuned to perform specific tasks.

## Bias, Risks, and Limitations

As with any other LLM, the model may generate content that does not correspond to reality, as well as wrong, biased, offensive, and otherwise inappropriate answers.

## How to Get Started with the Model

**Prompt template:**

``` python
"""### Instruction: {instruction}

### Response:
"""
```

**Usage:**

Use the code below to get started with the model.

``` python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer

def generate_prompt_test(instruction):
    # Wrap the instruction in the prompt template the model was fine-tuned on
    prompt = f"""### Instruction: {instruction}

### Response:
"""
    return prompt

model_name = "raicrits/Hermes7b_ITA_v1"

# Load the merged model in bfloat16, placing it automatically on the available GPU(s)
model = LlamaForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_name, add_eos_token=False)

prompt = generate_prompt_test("Cosa puoi dirmi sul dio Hermes?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, do_sample=True, num_beams=2, top_k=50, top_p=0.95,
                         max_new_tokens=256, early_stopping=True)

# Strip the prompt and keep only the generated answer
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip())
```

**Output:**

``` python
"""Hermes è un dio dell'antica Grecia. Era il dio del commercio, della comunicazione e del trasporto. Era anche il dio della mente e della intelligenza. Era noto per il suo eloquente linguaggio e la sua capacità di spostarsi velocemente. Era considerato il messaggero degli dèi e spesso veniva raffigurato con un cappello di pelle di capra e sandali."""
```

## Training Details

### Training Data

The model was fine-tuned on 120k random records of [raicrits/Orca_ITA_200k](https://huggingface.co/datasets/raicrits/Orca_ITA_200k).

### Training Procedure

The fine-tuning was performed using the [LoRA](https://arxiv.org/abs/2106.09685) approach; a minimal sketch of a comparable setup is shown below, and the exact hyperparameters follow.
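Since the training script itself is not published in this repository, the following is an illustrative sketch of how a comparable setup could be assembled with the Hugging Face `datasets` and `peft` libraries. The dataset split name, shuffling seed, and base-model loading details are assumptions; the LoRA values match the configuration reported in the hyperparameters section.

``` python
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import LlamaForCausalLM

# Sample 120k random instruction/answer pairs from the dataset
# (split name and seed are assumptions, not taken from the card)
dataset = load_dataset("raicrits/Orca_ITA_200k", split="train")
dataset = dataset.shuffle(seed=42).select(range(120_000))

base_model = LlamaForCausalLM.from_pretrained("NousResearch/Nous-Hermes-llama-2-7b")

# LoRA configuration as reported in the hyperparameters below
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)

# Wrap the base model so that only the LoRA adapter weights are trained
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```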
#### Training Hyperparameters

**Training setting:**
- train epochs = 3
- learning_rate = 2e-4
- mixed precision training: float16

**LoRA configuration:**
- r = 8
- lora_alpha = 16
- target_modules = ["q_proj", "v_proj"]
- lora_dropout = 0.05
- bias = "none"
- task_type = TaskType.CAUSAL_LM

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 1 NVIDIA A100/40GB
- **Hours used:** 78
- **Cloud Provider:** Private Infrastructure
- **Carbon Emitted:** 8.42 kg CO2 eq.

## Model Card Authors

Stefano Scotta (stefano.scotta@rai.it)

## Model Card Contact

stefano.scotta@rai.it