---
license: apache-2.0
language:
- it
- en
library_name: transformers
tags:
- sft
- it
- mistral
- chatml
---
# Model Information
XXXX is an updated version of Mistral-7B-v0.2, fine-tuned with supervised fine-tuning (SFT) and LoRA adapters.
- It is trained both on publicly available datasets, such as SQUAD-it, and on datasets created in-house.
- It is designed to understand and maintain context, making it well suited for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness (see the sketch below).
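
Because the model is tuned to stay grounded in supplied context, a common RAG pattern is to prepend the retrieved passages to the user question. The snippet below is a minimal sketch of that pattern; the prompt wording and the `retrieved_context` string are illustrative assumptions, not a prescribed format.

```python
# Hedged sketch of a RAG-style prompt: retrieved passages are prepended to the
# user question. The wording and the example context are illustrative
# assumptions, not an official prompt format for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("MoxoffSpA/xxxx", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("MoxoffSpA/xxxx")

retrieved_context = (
    "Il risotto allo zafferano è un piatto tipico milanese preparato con riso, "
    "brodo, burro, parmigiano e zafferano."
)
question = "Quali sono gli ingredienti principali del risotto allo zafferano?"

messages = [
    {
        "role": "user",
        "content": f"Rispondi usando solo il contesto seguente.\n\nContesto:\n{retrieved_context}\n\nDomanda: {question}",
    }
]

# Build the prompt with the chat template and generate a grounded answer
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output_ids = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.batch_decode(output_ids)[0])
```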
# Evaluation
We evaluated the model on the same test sets used for the Open Ita LLM Leaderboard:
| hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------|:----------------|:---------------------|:--------|
| 0.6067                | 0.4405          | 0.5112               | 0.52    |
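
If you want to reproduce these numbers, a minimal sketch with EleutherAI's lm-evaluation-harness is shown below. The task names are taken from the table headers above, while the few-shot settings and the exact task identifiers in your harness version are assumptions and may need adjusting.

```python
# Hedged sketch: leaderboard-style evaluation with lm-evaluation-harness.
# Task names and few-shot settings are assumptions based on the table above;
# adjust them to match your lm_eval version and the leaderboard configuration.
import lm_eval

# Accuracy (acc_norm) for the Italian HellaSwag and ARC splits
zero_shot = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MoxoffSpA/xxxx",
    tasks=["hellaswag_it", "arc_it"],
    num_fewshot=0,
)

# 5-shot accuracy for the Italian MMLU split
five_shot = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MoxoffSpA/xxxx",
    tasks=["m_mmlu_it"],
    num_fewshot=5,
)

print(zero_shot["results"])
print(five_shot["results"])
```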
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # use "cpu" if no GPU is available

model = AutoModelForCausalLM.from_pretrained("MoxoffSpA/xxxx")
tokenizer = AutoTokenizer.from_pretrained("MoxoffSpA/xxxx")

messages = [
    {"role": "user", "content": "Qual è il tuo piatto preferito?"},
    {"role": "assistant", "content": "Beh, ho un debole per una buona porzione di risotto allo zafferano. È un piatto che si distingue per il suo sapore ricco e il suo bellissimo colore dorato, rendendolo irresistibile!"},
    {"role": "user", "content": "Hai delle ricette con il risotto che consigli?"}
]

# Build the prompt with the model's chat template and move everything to the target device
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)

# Generate up to 1000 new tokens with sampling enabled
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Bias, Risks and Limitations
xxxx has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition
of the corpus used to train the base model (mistralai/Mistral-7B-v0.2) are also unknown, but it is likely to have included a mix of
web data and technical sources such as books and code.
## Links to resources
- SQUAD-it dataset: https://huggingface.co/datasets/squad_it
- Mistral-7B-v0.2 original weights: https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar
- Mistral-7B-v0.2 model: https://huggingface.co/alpindale/Mistral-7B-v0.2-hf
- Open Ita LLM Leaderboard: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard
## Quantized versions
We have also published 4-bit and 8-bit quantized versions of this model:
https://huggingface.co/MoxoffSpA/xxxxQuantized/main
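
If you prefer to quantize on the fly rather than download the pre-quantized weights, a minimal sketch using 4-bit loading with bitsandbytes is shown below. This setup is an assumption for convenience, not necessarily how the published quantized checkpoints were produced.

```python
# Hedged sketch: on-the-fly 4-bit loading via transformers + bitsandbytes.
# This is an assumed convenient setup, not the method used to produce the
# published quantized checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "MoxoffSpA/xxxx",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("MoxoffSpA/xxxx")
```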
## The Moxoff Team
Marco D'Ambra, Jacopo Abate, Gianpaolo Francesco Trotta