chiyuzhang committed
Commit 9eec408
Parent(s): 22e8baa
Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # LaMini-FLAN-T5-Small
 
-This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on [LaMini dataset]() that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository]().
+This model is one of our LaMini model series, presented in the paper "[LaMini: Distilling Knowledge from Large Language Models]()". It is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the [LaMini dataset](), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository]().
 
 ## Training Procedure
 We initialize with [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) and fine-tune it on our [LaMini dataset](). Its total number of parameters is 61M.

@@ -127,7 +127,7 @@ You can download the LaMini model series as follows. Note that not all models are per
 ## Use
 
 ### Intended use
-
+We recommend using the model to respond to human instructions written in natural language.
 
 We now show you how to load and use our model using HuggingFace `pipeline()`.
 ### CPU
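A minimal sketch of the initialization step described in the Training Procedure hunk above: loading google/flan-t5-small as the starting checkpoint before instruction fine-tuning. The loading calls are standard `transformers` APIs; everything beyond loading (dataset, trainer settings) is outside what this commit shows.

```python
# Hedged sketch: initialize from google/flan-t5-small, as the card describes.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

base = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Count parameters; compare against the 61M figure quoted in the card.
print(f"parameters: {sum(p.numel() for p in model.parameters()):,}")
```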
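The `### CPU` subsection presumably continues with a `pipeline()` snippet. As a hedged sketch of that usage, the checkpoint id below is an assumption, not something this diff confirms; substitute the actual repo id of this model.

```python
# Hedged sketch of loading the model via the Hugging Face pipeline() API.
# "MBZUAI/LaMini-Flan-T5-Small" is an assumed repo id for illustration only.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",             # standard task for T5-style seq2seq models
    model="MBZUAI/LaMini-Flan-T5-Small",
    device=-1,                          # -1 = run on CPU
)

instruction = "Please list three tips for staying healthy."
result = generator(instruction, max_length=512)
print(result[0]["generated_text"])
```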