---
license: other
datasets:
- raicrits/Orca_ITA_200k
language:
- it
pipeline_tag: text-generation
tags:
- LLM
- Italian
- Orca
- Hermes
- LLama2
library_name: transformers
---
# Model Card for raicrits/Hermes7b_ITA
<!-- Provide a quick summary of what the model is/does. -->
An open-source LLaMA 2 language model with 7B parameters, fine-tuned from [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) to follow instructions in Italian.
### Model Description
This model is a 7B-parameter LLM based on [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b), a version of [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) fine-tuned to follow instructions.
The model was further fine-tuned to follow instructions in Italian, using the [LoRA](https://arxiv.org/abs/2106.09685) approach and a dataset of 120k random instruction/answer pairs from [raicrits/Orca_ITA_200k](https://huggingface.co/datasets/raicrits/Orca_ITA_200k).
This repository contains the model weights merged with the LoRA adapters obtained during the fine-tuning procedure.
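For illustration only, a minimal sketch of how such a merge is typically produced with the `peft` library; the adapter path is hypothetical, since only the merged weights are distributed here.
``` python
# Illustration only: how LoRA adapters are typically merged back into the
# base model with the `peft` library. The adapter path is hypothetical;
# this repository already contains the merged weights.
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-llama-2-7b",
    torch_dtype=torch.bfloat16,
)
adapters = PeftModel.from_pretrained(base, "path/to/lora_adapters")  # hypothetical path
merged = adapters.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("Hermes7b_ITA")
```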
- **Developed by:** Stefano Scotta (stefano.scotta@rai.it)
- **Model type:** LLM fine-tuned to follow instructions
- **Language(s) (NLP):** Italian
- **License:** Other
- **Finetuned from model:** [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model can be used as is to respond to simple instructions in Italian or can be further fine-tuned to perform specific tasks.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
As with any other LLM, the model may generate content that does not correspond to reality, as well as wrong, biased, offensive, or otherwise inappropriate answers.
## How to Get Started with the Model
**Prompt template:**
``` python
"""### Instruction: {instruction}
### Response:
"""
```
**Usage:**
Use the code below to get started with the model.
``` python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer

def generate_prompt_test(instruction):
    # Wrap the instruction in the prompt template used during fine-tuning
    prompt = f"""### Instruction: {instruction}
### Response:
"""
    return prompt

model_name = "raicrits/Hermes7b_ITA"

# Load the merged model in bfloat16, dispatching it on the available devices
model = LlamaForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16
)
model.config.use_cache = True

tokenizer = AutoTokenizer.from_pretrained(model_name, add_eos_token=False)

# Build the prompt and move the tokenized inputs to the model's device
prompt = generate_prompt_test("Cosa puoi dirmi sul dio Hermes?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, do_sample=True, num_beams=2, top_k=50, top_p=0.95,
                         max_new_tokens=256, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip())
```
**Example output:**
``` python
"""Hermes è un dio dell'antica Grecia. Era il dio del commercio, della comunicazione e del trasporto. Era anche il dio della mente e della intelligenza. Era noto per il suo eloquente linguaggio e la sua capacità di spostarsi velocemente. Era considerato il messaggero degli dèi e spesso veniva raffigurato con un cappello di pelle di capra e sandali."""
```
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was fine-tuned on 120k random records from [raicrits/Orca_ITA_200k](https://huggingface.co/datasets/raicrits/Orca_ITA_200k).
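For illustration, a minimal sketch of how a random 120k-record subset can be drawn with the `datasets` library; the split name and the fixed seed are assumptions, not details reported in this card.
``` python
# Sketch of drawing a random 120k-record subset of the dataset with the
# `datasets` library; the split name and seed are assumptions.
from datasets import load_dataset

dataset = load_dataset("raicrits/Orca_ITA_200k", split="train")
subset = dataset.shuffle(seed=42).select(range(120_000))
print(subset)
```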
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Fine-tuning was performed using the [LoRA](https://arxiv.org/abs/2106.09685) approach.
#### Training Hyperparameters
**Training settings** (see the configuration sketch after the lists below):
- train epochs=3
- learning_rate=2e-4
- mixed precision training: float16
**LoRA configuration:**
- r=8
- lora_alpha=16
- target_modules=["q_proj","v_proj"]
- lora_dropout=0.05
- bias="none"
- task_type=TaskType.CAUSAL_LM
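
A minimal configuration sketch of the settings listed above, using the `peft` and `transformers` libraries; the output directory name is hypothetical, and dataset preparation, tokenization, and the Trainer wiring are omitted.
``` python
# Sketch of the LoRA/training configuration listed above, using `peft` and
# `transformers`. Dataset preparation, tokenization and the Trainer wiring
# are omitted; the output directory name is hypothetical.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import LlamaForCausalLM, TrainingArguments

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)

base_model = LlamaForCausalLM.from_pretrained("NousResearch/Nous-Hermes-llama-2-7b")
model = get_peft_model(base_model, lora_config)  # wrap the base model with LoRA adapters

training_args = TrainingArguments(
    output_dir="hermes7b_ita_lora",  # hypothetical output directory
    num_train_epochs=3,
    learning_rate=2e-4,
    fp16=True,                       # mixed precision training: float16
)
```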
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700); a rough back-of-envelope sketch follows the list below.
- **Hardware Type:** 1x NVIDIA A100 40GB
- **Hours used:** 78
- **Cloud Provider:** Private Infrastructure
- **Carbon Emitted:** 8.42 kg eq. CO2
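
For illustration only, a back-of-envelope version of the kind of estimate the calculator performs (energy in kWh times grid carbon intensity); the assumed GPU power draw and grid intensity below are not figures reported in this card.
``` python
# Illustration only: a back-of-envelope version of the estimate performed by
# the ML Impact calculator (energy in kWh times grid carbon intensity).
# The GPU power draw and the grid intensity below are assumptions, not
# figures reported in this model card.
gpu_power_kw = 0.4        # assumed average board power of one A100
hours = 78                # training time reported above
carbon_intensity = 0.27   # assumed kg CO2eq per kWh of the local grid

energy_kwh = gpu_power_kw * hours             # ~31.2 kWh
emissions_kg = energy_kwh * carbon_intensity  # ~8.4 kg CO2eq
print(f"{emissions_kg:.2f} kg eq. CO2")
```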
## Model Card Authors
Stefano Scotta (stefano.scotta@rai.it)
## Model Card Contact
stefano.scotta@rai.it |