stefanoscotta committed
Commit • 8be35b4 • 1 parent: d0c39e4

Update README.md
README.md CHANGED
```diff
@@ -16,7 +16,7 @@ An open-source LLaMa language model of 13b parameters fine-tuned to follow instr
 
 This model is an open-source LLM of 13b parameters based on [OpenLLaMA](https://github.com/openlm-research/open_llama), an open-source replica of Meta AI's LLaMA.
 The model was fine-tuned in order to follow instructions, as proposed in [Alpaca](https://github.com/tatsu-lab/stanford_alpaca),
-but using the [LoRA](https://arxiv.org/pdf/2106.09685.pdf) technique and a bigger dataset of instruction/answer pairs in Italian, [cosimoiaia/Loquace-102k](cosimoiaia/Loquace-102k).
+but using the [LoRA](https://arxiv.org/pdf/2106.09685.pdf) technique and a bigger dataset of instruction/answer pairs in Italian, [cosimoiaia/Loquace-102k](https://huggingface.co/datasets/cosimoiaia/Loquace-102k/viewer/cosimoiaia--Loquace-102k).
 
 This repository contains the model merged with the LoRA adapters obtained in the fine-tuning procedure.
 
```
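For context, a merged checkpoint like the one this repository ships is typically produced by loading the LoRA adapters onto the base model and folding them into its weights. Below is a minimal sketch with the `peft` library; the adapter path is hypothetical, since the diff does not name the adapter repository:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model referenced by the card.
base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_13b", torch_dtype=torch.float16
)

# "./lora-adapters" is a hypothetical path; the actual adapter repo is not given here.
model = PeftModel.from_pretrained(base, "./lora-adapters")

# Fold the LoRA weights into the base weights, yielding a plain standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")
```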
```diff
@@ -25,7 +25,7 @@ This repository contains the model merged with the
 - **Model type:** LLM fine-tuned to follow instructions
 - **Language(s) (NLP):** Italian
 - **License:** [More Information Needed]
-- **Finetuned from model:** [openlm-research/open_llama_13b](openlm-research/open_llama_13b)
+- **Finetuned from model:** [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)
 
 
 ## Uses
```
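Once merged, the checkpoint loads like any causal LM. A minimal sketch with the standard `transformers` API, using the base-model id as a stand-in because the fine-tuned repository id does not appear in this diff; the Alpaca-style prompt template is also an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in id: replace with the fine-tuned repository's id.
model_id = "openlm-research/open_llama_13b"

# OpenLLaMA's card recommends the slow (SentencePiece) tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical Alpaca-style prompt; the exact template is not shown in the diff.
prompt = "Istruzione: Spiega cos'è il machine learning.\nRisposta:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```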
```diff
@@ -73,7 +73,7 @@ Use the code below to get started with the model.
 ### Training Data
 
 <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-The model was fine-tuned on [cosimoiaia/Loquace-102k](cosimoiaia/Loquace-102k), a dataset of 102k question/answer pairs in Italian.
+The model was fine-tuned on [cosimoiaia/Loquace-102k](https://huggingface.co/datasets/cosimoiaia/Loquace-102k/viewer/cosimoiaia--Loquace-102k), a dataset of 102k question/answer pairs in Italian.
 
 
 ### Training Procedure
```
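The training dataset is public on the Hub, so it can be inspected directly. A minimal sketch with the `datasets` library; the split layout is an assumption, and no field names are assumed:

```python
from datasets import load_dataset

# Dataset id taken from the README; a default "train" split is an assumption.
ds = load_dataset("cosimoiaia/Loquace-102k")

print(ds)              # split names and sizes (~102k pairs per the card)
print(ds["train"][0])  # one raw question/answer record
```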