LongForm-**OPT-2.7B**: https://huggingface.co/akoksal/LongForm-OPT-2.7B

LongForm-**OPT-6.7B**: https://huggingface.co/akoksal/LongForm-OPT-6.7B

## How to Load

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the LongForm-T5-XL model and its tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("akoksal/LongForm-T5-XL")
tokenizer = AutoTokenizer.from_pretrained("akoksal/LongForm-T5-XL")

instruction = "Write an essay about meditation."

# Fix the seed for reproducibility, then generate with nucleus sampling
torch.manual_seed(42)
input_ids = tokenizer(instruction, return_tensors="pt").input_ids
target_ids = model.generate(input_ids, do_sample=True, max_new_tokens=50, top_p=0.9)
tokenizer.decode(target_ids[0], skip_special_tokens=True)
# Output:
# > Meditation is an ancient, spiritual practice. Meditation was first\
# practiced as early as 3000 BC by Indians. Meditation has been practiced\
# by people for thousands of years. People meditate in order to become more\
# present in their life. Meditation is
```
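
The OPT checkpoints are decoder-only models, so they load with `AutoModelForCausalLM` rather than `AutoModelForSeq2SeqLM`. Below is a minimal sketch, assuming the same instruction format and sampling settings as the T5 example above (check the model cards for the exact prompt template the OPT checkpoints expect):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# OPT is a causal (decoder-only) LM, unlike the encoder-decoder T5 variants
model = AutoModelForCausalLM.from_pretrained("akoksal/LongForm-OPT-2.7B")
tokenizer = AutoTokenizer.from_pretrained("akoksal/LongForm-OPT-2.7B")

instruction = "Write an essay about meditation."
torch.manual_seed(42)
input_ids = tokenizer(instruction, return_tensors="pt").input_ids
target_ids = model.generate(input_ids, do_sample=True, max_new_tokens=50, top_p=0.9)
# For a causal LM the generated sequence includes the prompt itself
print(tokenizer.decode(target_ids[0], skip_special_tokens=True))
```
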
## Evaluation
We provide an in-depth evaluation of LongForm models and baselines in the paper, reporting METEOR scores on out-of-domain datasets. On all tasks, recipe generation (RGen), long-form question answering (ELI5), and short story generation (WritingPrompts/WP), LongForm models outperform prior instruction-tuned models.
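
For reference, METEOR can be computed with the Hugging Face `evaluate` library. This is a minimal sketch for illustration only, with made-up example strings; it is not necessarily the paper's exact evaluation setup:

```python
import evaluate

# Load the METEOR metric from the `evaluate` library (requires nltk)
meteor = evaluate.load("meteor")

predictions = ["Meditation is an ancient, spiritual practice."]        # model outputs
references = ["Meditation is a spiritual practice with ancient roots."]  # gold targets

# Returns a dict like {"meteor": <score>}
results = meteor.compute(predictions=predictions, references=references)
print(results["meteor"])
```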