|
--- |
|
language: |
|
- en |
|
tags: |
|
- summarization |
|
- led |
|
- summary |
|
- longformer |
|
- booksum |
|
- long-document |
|
- long-form |
|
license: apache-2.0 |
|
datasets: |
|
- kmfoda/booksum |
|
metrics: |
|
- rouge |
|
widget: |
|
- text: large earthquakes along a given fault segment do not occur at random intervals |
|
because it takes time to accumulate the strain energy for the rupture. The rates |
|
at which tectonic plates move and accumulate strain at their boundaries are approximately |
|
uniform. Therefore, in first approximation, one may expect that large ruptures |
|
of the same fault segment will occur at approximately constant time intervals. |
|
If subsequent main shocks have different amounts of slip across the fault, then |
|
the recurrence time may vary, and the basic idea of periodic mainshocks must be |
|
modified. For great plate boundary ruptures the length and slip often vary by |
|
a factor of 2. Along the southern segment of the San Andreas fault the recurrence |
|
interval is 145 years with variations of several decades. The smaller the standard |
|
deviation of the average recurrence interval, the more specific could be the long |
|
term prediction of a future mainshock. |
|
example_title: earthquakes |
|
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\ |
|
\ are fed into a neural network that predicts values in the reconstructed domain.\ |
|
\ Then, this domain is mapped to the sensor domain where sensor measurements are\ |
|
\ available as supervision. Class and Section Problems Addressed Generalization\ |
|
\ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\ |
|
\ Representations (Section 3) Computation & memory efficiency, representation\ |
|
\ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\ |
|
\ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\ |
|
\ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\ |
|
\ of techniques in the neural field toolbox each addresses problems that arise\ |
|
\ in learning, inference, and control. (Section 3). We can supervise reconstruction\ |
|
\ via differentiable forward maps that transform Or project our domain (e.g, 3D\ |
|
\ reconstruction via 2D images; Section 4) With appropriate network architecture\ |
|
\ choices, we can overcome neural network spectral biases (blurriness) and efficiently\ |
|
\ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\ |
|
\ fields to add constraints and regularizations, and to achieve editable representations\ |
|
\ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\ |
|
\ to help solve problems with neural fields There are three components in a conditional\ |
|
\ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\ |
|
\ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\ |
|
\ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\ |
|
\ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\ |
|
\ field itself $. The encoder \u20AC finds the most probable z given the observations\ |
|
\ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\ |
|
\ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\ |
|
\ schemes with different optimality guarantees (Section 2.1.1), both global and\ |
|
\ local conditioning (Section 2.1.2), and different mapping functions Y (Section\ |
|
\ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\ |
|
\ shape given a partial or noisy point cloud. We need a suitable prior over the\ |
|
\ sur- face in its reconstruction domain to generalize to the partial observations.\ |
|
\ A neural network expresses a prior via the function space of its architecture\ |
|
\ and parameters 0, and generalization is influenced by the inductive bias of\ |
|
\ this function space (Section 5)." |
|
example_title: scientific paper |
|
- text: ' the big variety of data coming from diverse sources is one of the key properties |
|
of the big data phenomenon. It is, therefore, beneficial to understand how data |
|
is generated in various environments and scenarios, before looking at what should |
|
be done with this data and how to design the best possible architecture to accomplish |
|
this The evolution of IT architectures, described in Chapter 2, means that the |
|
data is no longer processed by a few big monolith systems, but rather by a group |
|
of services In parallel to the processing layer, the underlying data storage has |
|
also changed and became more distributed This, in turn, required a significant |
|
paradigm shift as the traditional approach to transactions (ACID) could no longer |
|
be supported. On top of this, cloud computing is becoming a major approach with |
|
the benefits of reducing costs and providing on-demand scalability but at the |
|
same time introducing concerns about privacy, data ownership, etc In the meantime |
|
the Internet continues its exponential growth: Every day both structured and unstructured |
|
data is published and available for processing: To achieve competitive advantage |
|
companies have to relate their corporate resources to external services, e.g. |
|
financial markets, weather forecasts, social media, etc While several of the sites |
|
provide some sort of API to access the data in a more orderly fashion; countless |
|
sources require advanced web mining and Natural Language Processing (NLP) processing |
|
techniques: Advances in science push researchers to construct new instruments |
|
for observing the universe O conducting experiments to understand even better |
|
the laws of physics and other domains. Every year humans have at their disposal |
|
new telescopes, space probes, particle accelerators, etc These instruments generate |
|
huge streams of data, which need to be stored and analyzed. The constant drive |
|
for efficiency in the industry motivates the introduction of new automation techniques |
|
and process optimization: This could not be done without analyzing the precise |
|
data that describe these processes. As more and more human tasks are automated, |
|
machines provide rich data sets, which can be analyzed in real-time to drive efficiency |
|
to new levels. Finally, it is now evident that the growth of the Internet of Things |
|
is becoming a major source of data. More and more of the devices are equipped |
|
with significant computational power and can generate a continuous data stream |
|
from their sensors. In the subsequent sections of this chapter, we will look at |
|
the domains described above to see what they generate in terms of data sets. We |
|
will compare the volumes but will also look at what is characteristic and important |
|
from their respective points of view. 3.1 The Internet is undoubtedly the largest |
|
database ever created by humans. While several well described; cleaned, and structured |
|
data sets have been made available through this medium, most of the resources |
|
are of an ambiguous, unstructured, incomplete or even erroneous nature. Still, |
|
several examples in the areas such as opinion mining, social media analysis, e-governance, |
|
etc, clearly show the potential lying in these resources. Those who can successfully |
|
mine and interpret the Internet data can gain unique insight and competitive advantage |
|
in their business An important area of data analytics on the edge of corporate |
|
IT and the Internet is Web Analytics.' |
|
example_title: data science textbook |
|
- text: "Transformer-based models have shown to be very useful for many NLP tasks.\ |
|
\ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\ |
|
\ time & memory complexity (where nn is sequence length). Hence, it's computationally\ |
|
\ very expensive to apply transformer-based models on long sequences n > 512n>512.\ |
|
\ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\ |
|
\ try to remedy this problem by approximating the full attention matrix. You can\ |
|
\ checkout \U0001F917's recent blog post in case you are unfamiliar with these\ |
|
\ models.\nBigBird (introduced in paper) is one of such recent models to address\ |
|
\ this issue. BigBird relies on block sparse attention instead of normal attention\ |
|
\ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\ |
|
\ much lower computational cost compared to BERT. It has achieved SOTA on various\ |
|
\ tasks involving very long sequences such as long documents summarization, question-answering\ |
|
\ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\ |
|
Transformers. The goal of this post is to give the reader an in-depth understanding\ |
|
\ of big bird implementation & ease one's life in using BigBird with \U0001F917\ |
|
Transformers. But, before going into more depth, it is important to remember that\ |
|
\ the BigBird's attention is an approximation of BERT's full attention and therefore\ |
|
\ does not strive to be better than BERT's full attention, but rather to be more\ |
|
\ efficient. It simply allows to apply transformer-based models to much longer\ |
|
\ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\ |
|
\ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\ |
|
\ would be preferred over block sparse attention (which we are going to discuss\ |
|
\ in this post).\nIf you wonder why we need more compute when working with longer\ |
|
\ sequences, this blog post is just right for you!\nSome of the main questions\ |
|
\ one might have when working with standard BERT-like attention include:\nDo all\ |
|
\ tokens really have to attend to all other tokens? Why not compute attention\ |
|
\ only over important tokens? How to decide what tokens are important? How to\ |
|
\ attend to just a few tokens in a very efficient way? In this blog post, we will\ |
|
\ try to answer those questions.\nWhat tokens should be attended to? We will give\ |
|
\ a practical example of how attention works by considering the sentence 'BigBird\ |
|
\ is now available in HuggingFace for extractive question answering'. In BERT-like\ |
|
\ attention, every word would simply attend to all other tokens.\nLet's think\ |
|
\ about a sensible choice of key tokens that a queried token actually only should\ |
|
\ attend to by writing some pseudo-code. Will will assume that the token available\ |
|
\ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\ |
|
\ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\ |
|
\ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\ |
|
>>> # further let's assume, we're trying to understand the representation of 'available'\ |
|
\ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\ |
|
\ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\ |
|
\ = [] # => currently 'available' token doesn't have anything to attend Nearby\ |
|
\ tokens should be important because, in a sentence (sequence of words), the current\ |
|
\ word is highly dependent on neighboring past & future tokens. This intuition\ |
|
\ is the idea behind the concept of sliding attention." |
|
example_title: bigbird blog intro |
|
- text: 'The majority of available text summarization datasets include short-form |
|
source documents that lack long-range causal and temporal dependencies, and often |
|
contain strong layout and stylistic biases. While relevant, such datasets will |
|
offer limited challenges for future generations of text summarization systems. |
|
We address these issues by introducing BookSum, a collection of datasets for long-form |
|
narrative summarization. Our dataset covers source documents from the literature |
|
domain, such as novels, plays and stories, and includes highly abstractive, human |
|
written summaries on three levels of granularity of increasing difficulty: paragraph-, |
|
chapter-, and book-level. The domain and structure of our dataset poses a unique |
|
set of challenges for summarization systems, which include: processing very long |
|
documents, non-trivial causal and temporal dependencies, and rich discourse structures. |
|
To facilitate future work, we trained and evaluated multiple extractive and abstractive |
|
summarization models as baselines for our dataset.' |
|
example_title: BookSum Abstract |
|
inference: |
|
parameters: |
|
max_length: 64 |
|
min_length: 8 |
|
no_repeat_ngram_size: 3 |
|
early_stopping: true |
|
repetition_penalty: 3.5 |
|
length_penalty: 0.3 |
|
encoder_no_repeat_ngram_size: 3 |
|
num_beams: 4 |
|
model-index: |
|
- name: pszemraj/led-large-book-summary |
|
results: |
|
- task: |
|
type: summarization |
|
name: Summarization |
|
dataset: |
|
name: kmfoda/booksum |
|
type: kmfoda/booksum |
|
config: kmfoda--booksum |
|
split: test |
|
metrics: |
|
- name: ROUGE-1 |
|
type: rouge |
|
value: 31.7308 |
|
verified: true |
|
- name: ROUGE-2 |
|
type: rouge |
|
value: 5.3311 |
|
verified: true |
|
- name: ROUGE-L |
|
type: rouge |
|
value: 16.1465 |
|
verified: true |
|
- name: ROUGE-LSUM |
|
type: rouge |
|
value: 29.0883 |
|
verified: true |
|
- name: loss |
|
type: loss |
|
value: 4.815707206726074 |
|
verified: true |
|
- name: gen_len |
|
type: gen_len |
|
value: 154.9036 |
|
verified: true |
|
- task: |
|
type: summarization |
|
name: Summarization |
|
dataset: |
|
name: samsum |
|
type: samsum |
|
config: samsum |
|
split: test |
|
metrics: |
|
- name: ROUGE-1 |
|
type: rouge |
|
value: 33.4484 |
|
verified: true |
|
- name: ROUGE-2 |
|
type: rouge |
|
value: 10.4249 |
|
verified: true |
|
- name: ROUGE-L |
|
type: rouge |
|
value: 24.5802 |
|
verified: true |
|
- name: ROUGE-LSUM |
|
type: rouge |
|
value: 29.8226 |
|
verified: true |
|
- name: loss |
|
type: loss |
|
value: 4.176078796386719 |
|
verified: true |
|
- name: gen_len |
|
type: gen_len |
|
value: 65.4005 |
|
verified: true |
|
- task: |
|
type: summarization |
|
name: Summarization |
|
dataset: |
|
name: billsum |
|
type: billsum |
|
config: default |
|
split: test |
|
metrics: |
|
- name: ROUGE-1 |
|
type: rouge |
|
value: 40.5843 |
|
verified: true |
|
- name: ROUGE-2 |
|
type: rouge |
|
value: 17.3401 |
|
verified: true |
|
- name: ROUGE-L |
|
type: rouge |
|
value: 25.1256 |
|
verified: true |
|
- name: ROUGE-LSUM |
|
type: rouge |
|
value: 34.6619 |
|
verified: true |
|
- name: loss |
|
type: loss |
|
value: 4.792657375335693 |
|
verified: true |
|
- name: gen_len |
|
type: gen_len |
|
value: 163.9394 |
|
verified: true |
|
--- |
|
|
|
# Longformer Encoder-Decoder (LED) fine-tuned on BookSum
|
|
|
Demo notebook:
|
|
|
[![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/pszemraj/d9a0495861776168fd5cdcd7731bc4ee/example-long-t5-tglobal-base-16384-book-summary.ipynb) |
|
|
|
- A fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the BookSum dataset.
- Goal: a model that generalizes well and is useful for summarizing long text in academic and everyday use.
- Works well on long inputs and can handle up to 16,384 tokens per batch (_if you have the GPU memory for it_); see the sketch below for checking input length first.
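A minimal sketch for checking how many tokens an input occupies before summarizing (assumes the model's own tokenizer; `wall_of_text` is a placeholder for your document):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pszemraj/led-large-book-summary")

wall_of_text = "your words here"  # placeholder: the document to summarize
n_tokens = len(tokenizer.encode(wall_of_text))
print(f"{n_tokens} tokens (model limit: 16384)")
```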
|
|
|
> Note: the hosted inference API is set to generate a maximum of 64 tokens for runtime reasons, so its summaries may appear truncated (depending on the length of the input text). For best results, use Python as shown below.
|
|
|
--- |
|
|
|
# Usage - Basic |
|
|
|
- Use `encoder_no_repeat_ngram_size=3` when calling the pipeline object to improve summary quality.
- This forces the model to use vocabulary not found in the source text and produce an _abstractive_ summary; otherwise, it may simply compile the best _extractive_ summary from the input.
|
|
|
Load the model into a pipeline object: |
|
|
|
```python |
|
import torch |
|
from transformers import pipeline |
|
|
|
hf_name = 'pszemraj/led-large-book-summary' |
|
|
|
summarizer = pipeline( |
|
"summarization", |
|
hf_name, |
|
device=0 if torch.cuda.is_available() else -1, |
|
) |
|
``` |
|
|
|
- Feed your text into the pipeline object:
|
|
|
```python |
|
wall_of_text = "your words here" |
|
|
|
result = summarizer(
    wall_of_text,
    min_length=16,
    max_length=256,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["summary_text"])
```
|
|
|
|
|
**Important:** To generate the best-quality summaries, use the global attention mask when decoding, as demonstrated in [this community notebook](https://colab.research.google.com/drive/12INTTR6n64TzS4RrXZxMSXfrOd9Xzamo?usp=sharing); see the definition of `generate_answer(batch)` there.
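A minimal sketch of that approach with `model.generate` (assumes `wall_of_text` from above and enough memory for the full input; generation parameters mirror the pipeline example):

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

hf_name = "pszemraj/led-large-book-summary"
tokenizer = AutoTokenizer.from_pretrained(hf_name)
model = LEDForConditionalGeneration.from_pretrained(hf_name)

inputs = tokenizer(
    wall_of_text,
    return_tensors="pt",
    truncation=True,
    max_length=16384,
)

# give the first token global attention, as recommended for LED
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    **inputs,
    global_attention_mask=global_attention_mask,
    min_length=16,
    max_length=256,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```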
|
|
|
If you face compute constraints, try the base version, [`pszemraj/led-base-book-summary`](https://huggingface.co/pszemraj/led-base-book-summary).

- all the generation parameters on the API here are the same as for [the base model](https://huggingface.co/pszemraj/led-base-book-summary), for easy comparison between versions.
|
|
|
## Training and evaluation data |
|
|
|
- the [BookSum](https://arxiv.org/abs/2105.08209) dataset
- During training, the input text was the text of the `chapter` field and the target was the `summary_text` field (see the inspection sketch after this list).
|
- Eval results can be found [here](https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-79c1c0d8-10905463) with metrics on the sidebar. |
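A minimal sketch for inspecting the dataset with the `datasets` library (the `chapter` and `summary_text` columns are those named above):

```python
from datasets import load_dataset

ds = load_dataset("kmfoda/booksum")
example = ds["train"][0]
print(example["chapter"][:500])  # model input: the chapter text
print(example["summary_text"])   # training target: the reference summary
```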
|
|
|
## Training procedure |
|
|
|
- Training completed on the BookSum dataset for 13 total epochs |
|
- **The final four epochs combined the training and validation sets as 'train' in an effort to increase generalization.** |
|
|
|
### Training hyperparameters |
|
|
|
#### Initial Three Epochs |
|
|
|
The following hyperparameters were used during training (a rough mapping to `Seq2SeqTrainingArguments` is sketched after this list):
|
- learning_rate: 5e-05 |
|
- train_batch_size: 1 |
|
- eval_batch_size: 1 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- gradient_accumulation_steps: 4 |
|
- total_train_batch_size: 4 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 3 |
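As a rough illustration (not the actual training script), these settings map approximately to Hugging Face `Seq2SeqTrainingArguments`; the `output_dir` is hypothetical, and Adam's betas and epsilon above are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./led-large-book-summary",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
```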
|
|
|
#### In-between Epochs |
|
|
|
Unfortunately, complete records are not on hand for the middle epochs; the following should be representative:
|
|
|
- learning_rate: 4e-05 |
|
- train_batch_size: 2 |
|
- eval_batch_size: 2 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- gradient_accumulation_steps: 16 |
|
- total_train_batch_size: 32 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_ratio: 0.05 |
|
- num_epochs: 6 (in addition to prior model) |
|
|
|
#### Final Two Epochs |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 2e-05 |
|
- train_batch_size: 1 |
|
- eval_batch_size: 1 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- gradient_accumulation_steps: 16 |
|
- total_train_batch_size: 16 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_ratio: 0.03 |
|
- num_epochs: 2 (in addition to prior model) |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.19.2 |
|
- PyTorch 1.11.0+cu113
|
- Datasets 2.2.2 |
|
- Tokenizers 0.12.1 |