marcodambra committed 5d0e2b2 (parent: 3e02683): Update README.md

README.md (changed):

# Model Information

XXXX is an updated version of [Mistral-7B-v0.2](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf), specifically fine-tuned with SFT and LoRA adjustments; a minimal sketch of this setup follows the list below.

- It's trained on both publicly available datasets, like [SQUAD-it](https://huggingface.co/datasets/squad_it), and datasets we've created in-house.
- It's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
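
As noted above, here is a rough illustration of attaching a LoRA adapter for SFT with Hugging Face `peft`. The rank, alpha, and target modules are illustrative assumptions, not the values actually used for this model.

```python
# Minimal sketch of an SFT + LoRA setup with peft.
# Hyperparameters and target modules are assumptions, not our exact recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "alpindale/Mistral-7B-v0.2-hf"  # base weights referenced in this card
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

lora_config = LoraConfig(
    r=16,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Training would then proceed with a standard SFT loop (for example `trl`'s `SFTTrainer`) over the public and in-house datasets described above.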

# Evaluation

We evaluated the model using the same test sets as the Open Ita LLM Leaderboard.
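
For reference, a comparable run could be scripted with EleutherAI's lm-evaluation-harness. The task names and the `MoxoffSpA/xxxx` placeholder id below are assumptions about the leaderboard setup, not an official recipe.

```python
# Hypothetical evaluation sketch using lm-evaluation-harness (pip install lm-eval).
# Task names and model id are assumptions, not the leaderboard's official config.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=MoxoffSpA/xxxx",  # placeholder model id
    tasks=["arc_it", "hellaswag_it"],        # assumed Italian test sets
)
print(results["results"])
```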

[...]

## Bias, Risks and Limitations

xxxx has not been aligned to human preferences for safety within an RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base model (mistralai/Mistral-7B-v0.2) are also unknown, although it likely included a mix of web data and technical sources such as books and code.

## Links to resources

- SQUAD-it dataset: https://huggingface.co/datasets/squad_it
- Mistral_7B_v0.2 original weights: https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar
- Mistral_7B_v0.2 model: https://huggingface.co/alpindale/Mistral-7B-v0.2-hf
- Open Ita LLM Leaderboard: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard

## Quantized versions

We have also published 4-bit and 8-bit versions of this model:
https://huggingface.co/MoxoffSpA/xxxxQuantized/main
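
If you prefer to quantize on the fly instead, a minimal bitsandbytes sketch is shown below; the `MoxoffSpA/xxxx` id is a placeholder from this card, and whether the published checkpoints use the same format is an assumption.

```python
# Minimal sketch: loading the model in 4-bit via bitsandbytes on-the-fly
# quantization. Model id is a placeholder; the published 4-bit/8-bit
# checkpoints may use a different format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "MoxoffSpA/xxxx"  # placeholder id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # normal-float 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

For the 8-bit variant, `BitsAndBytesConfig(load_in_8bit=True)` works analogously.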

## The Moxoff Team

Marco D'Ambra, Jacopo Abate, Gianpaolo Francesco Trotta