
Introduction

Allenai's Longformer Encoder-Decoder (LED).

As described in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and Arman Cohan, led-large-16384 was initialized from bart-large, since both models share the exact same architecture. To be able to process sequences of up to 16K tokens, bart-large's position embedding matrix was simply copied 16 times.
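The copying step can be illustrated with a small PyTorch sketch. The shapes below are assumptions (bart-large learns 1,024 position embeddings of hidden size 1,024), and the random matrix is a stand-in for the real pretrained weights:

```python
import torch

# Assumed shapes: bart-large has 1024 learned position embeddings,
# each of hidden size 1024; LED extends this to 16 * 1024 = 16384 positions.
bart_max_pos, hidden = 1024, 1024
bart_pos_emb = torch.randn(bart_max_pos, hidden)  # stand-in for the pretrained matrix

# Tile the matrix 16 times along the position axis.
led_pos_emb = bart_pos_emb.repeat(16, 1)

print(led_pos_emb.shape)  # torch.Size([16384, 1024])
```

Because the matrix is tiled rather than re-learned, position 1024 starts with the same embedding as position 0; fine-tuning can then adapt these copied embeddings to longer contexts.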

This model is especially interesting for long-range summarization and question answering.
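A minimal inference sketch with the transformers library is shown below. LED requires a global_attention_mask in addition to the usual inputs; putting global attention on the first token is the pattern the LED documentation suggests for summarization. Note that this checkpoint is not fine-tuned on any downstream task, so the generated text is only a placeholder until after fine-tuning:

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-large-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-large-16384")

article = "Transformers struggle with long documents because self-attention scales quadratically with sequence length."
inputs = tokenizer(article, return_tensors="pt")

# LED uses sparse local attention by default; mark tokens that should
# attend globally. Global attention on the first token is a common choice.
global_attention_mask = torch.zeros_like(inputs.input_ids)
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs.input_ids,
    global_attention_mask=global_attention_mask,
    max_length=64,
    num_beams=2,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```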

Fine-tuning for a downstream task

This notebook shows how led-large-16384 can effectively be fine-tuned on a downstream task.
