chiyuzhang committed on
Commit 933dd29
1 Parent(s): 6cce9c0

Update README.md

Files changed (1)
  1. README.md +4 -2
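
For orientation, a minimal sketch of querying the model this README describes through the Hugging Face `transformers` text2text-generation pipeline follows. The repo id `MBZUAI/LaMini-Flan-T5-77M`, the example instruction, and the generation length are illustrative assumptions, not taken from this diff.

```python
# Minimal usage sketch for an instruction-tuned FLAN-T5 checkpoint.
# The repo id below is an assumption; check the model card for the exact id.
from transformers import pipeline

generator = pipeline("text2text-generation", model="MBZUAI/LaMini-Flan-T5-77M")

instruction = "Please list three tips for staying healthy."
result = generator(instruction, max_length=128)

print(result[0]["generated_text"])
```
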
README.md CHANGED
@@ -20,9 +20,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # LaMini-FLAN-T5-77M
 
-This model is one of our LaMini model series in paper "[LaMini: Distilling Knowledge from Large Language Models]()". This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on [LaMini dataset]() that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository]().
+[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
+[![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
 
-You can view other LaMini model series as follow. Note that not all models are performing as well. Models with are those with the best overall performance given their size/architecture. More details can be seen in our paper.
+This model is one of our LaMini model series in paper "[LaMini: A Diverse Herd of Distilled Models from Large-Scale Instructions]()". This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on [LaMini dataset]() that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini/).
+You can view other LaMini model series as follow. Note that not all models are performing as well. More details can be seen in our paper.
 
 <table>
 <thead>