## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines

📣 Curious how 🍮 🦙 **Flan-Alpaca** performs on the large-scale LLM evaluation benchmark **InstructEval**? Read our paper: [https://arxiv.org/pdf/2306.04757.pdf](https://arxiv.org/pdf/2306.04757.pdf). We evaluated more than 10 open-source instruction-tuned LLMs from various LLM families, including Pythia, LLaMA, T5, UL2, OPT, and Mosaic. Code and datasets: [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval)
Our [repository](https://github.com/declare-lab/flan-alpaca) contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
synthetic instruction tuning to existing instruction-tuned models such as [Flan-T5](https://arxiv.org/abs/2210.11416).
The pretrained models and demos are available on HuggingFace 🤗:
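As a minimal sketch of how one of the released checkpoints can be loaded with the Hugging Face `transformers` pipeline API (the model id `declare-lab/flan-alpaca-base` and the generation parameters are assumptions for illustration; check the Hub for the released sizes):

```python
from transformers import pipeline

# Assumed model id for illustration; other sizes may be available on the Hub.
generator = pipeline("text2text-generation", model="declare-lab/flan-alpaca-base")

prompt = "Write an email about an alpaca that likes flan."
outputs = generator(prompt, max_length=128, do_sample=True)
print(outputs[0]["generated_text"])
```

Since these are sequence-to-sequence (T5-style) models, the `text2text-generation` pipeline is the appropriate task; decoder-only variants would use `text-generation` instead.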