akoksal committed on
Commit
3768a95
1 Parent(s): 883a4c4

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -76,12 +76,12 @@ We provide in-depth evaluation of LongForm models and baselines in the paper. We
 | **Flan-T5** | 10.6 | 20.9* | 3.5 | 7.4 |
 | **Alpaca-LLaMA-7B** | 14.6 | 19.5 | 12.5 | 11.8 |
 | **OPT-30B** | 11.1 | 18.6 | 12.2 | 2.6 |
-| **[LongForm-T5-XL](https://huggingface.co/akoksal/LongForm-T5-XL)** | 16.3 | 20.2 | 18.3 | 10.6 |
-| **[LongForm-OPT-2.7B](https://huggingface.co/akoksal/LongForm-OPT-2.7B)** | 17.8 | 15.5 | 17.9 | **19.9** |
-| **[LongForm-OPT-6.7B](https://huggingface.co/akoksal/LongForm-OPT-6.7B)** | 17.7 | 16.9 | 17.2 | 19.0 |
-| **LongForm-LLaMA-7B**‡ | **19.7** | **21.7** | **18.6** | 18.9 |
+| [**LongForm-T5-XL**](https://huggingface.co/akoksal/LongForm-T5-XL) | 16.3 | 20.2 | 18.3 | 10.6 |
+| [**LongForm-OPT-2.7B**](https://huggingface.co/akoksal/LongForm-OPT-2.7B) | 17.8 | 15.5 | 17.9 | **19.9** |
+| [**LongForm-OPT-6.7B**](https://huggingface.co/akoksal/LongForm-OPT-6.7B) | 17.7 | 16.9 | 17.2 | 19.0 |
+| [**LongForm-LLaMA-7B**](https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff)‡ | **19.7** | **21.7** | **18.6** | 18.9 |
 
-‡: We cannot release LongForm-LLaMA-7B publicly due to restrictions of LLaMA models.
+‡: We can only release the difference between LongForm-LLaMA-7B and pretrained LLaMA-7B publicly due to the restrictions of LLaMA models.
 
 ## Limitations
 The LongForm dataset and models mainly focus on long text generation and have limitations regarding structured prediction tasks in NLP. Additionally, we observe that LongForm models may present hallucination problems similar to those found in LLMs.
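The footnote added in this commit says only a weight *difference* between LongForm-LLaMA-7B and pretrained LLaMA-7B can be published. A minimal sketch of how such a diff would typically be applied to reconstruct the finetuned weights, assuming the release stores per-parameter deltas (`diff = finetuned - pretrained`). The function name and the use of plain floats are illustrative; a real checkpoint would hold torch tensors in a state dict, and the LongForm repository may ship its own conversion script:

```python
# Hypothetical sketch: recover finetuned weights from a released weight diff,
# assuming the diff stores per-parameter deltas (finetuned - pretrained).
# Plain floats stand in for real tensors here.

def apply_weight_diff(pretrained: dict, diff: dict) -> dict:
    """Add each diff value to the matching pretrained value."""
    if set(pretrained) != set(diff):
        raise ValueError("parameter names in diff must match the pretrained model")
    return {name: pretrained[name] + diff[name] for name in pretrained}

# Toy example with two "parameters":
pretrained = {"layer.weight": 1.0, "layer.bias": -0.5}
diff = {"layer.weight": 0.25, "layer.bias": 0.1}
finetuned = apply_weight_diff(pretrained, diff)
# finetuned["layer.weight"] == 1.25
```

Publishing only the delta means the recipient must already hold the original LLaMA-7B weights, which is how the LLaMA license restriction is respected.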