---
library_name: transformers
language:
- en
license: apache-2.0
base_model: google/flan-t5-xl
datasets:
- pszemraj/summary-map-reduce-v1
pipeline_tag: text2text-generation
tags:
- map-reduce
- summarization
---

# flan-t5-xl-summary-map-reduce-1024

A larger text2text model trained to perform the "reduce" step (_consolidation step_) of map-reduce summarization.

## About 

> [!TIP]
> Refer to [this wiki page](https://github.com/pszemraj/textsum/wiki/consolidating-summaries) or the [smaller BART model card](https://hf.co/pszemraj/bart-large-summary-map-reduce) for explanations and usage examples. 


Compared to the smaller BART model, this model seems to:

- produce more eloquent final reduced summaries
- be more "gullible"/sensitive to noise in the input summaries
  - i.e. a hallucinated one-off term/name/entity is more likely to appear in the reduced summary
- be agnostic to whitespace in the input (_by definition, since the T5 tokenizer normalizes whitespace_)

Therefore, it's recommended to compare sample outputs from this model and [the BART version](https://hf.co/pszemraj/bart-large-summary-map-reduce) on your own data to see which works better for your use case.
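
As a starting point for such a comparison, here is a minimal usage sketch with the `transformers` `text2text-generation` pipeline. The repo id, input text, generation settings, and hardware assumptions (a GPU that fits an XL-sized model in bf16) are illustrative, not prescriptive; see the linked wiki page for the intended input format.

```python
# Minimal usage sketch (assumptions: the repo id matches this card's namespace, a GPU
# that fits an XL-sized model in bf16, and chunk summaries passed as one newline-separated string).
import torch
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="pszemraj/flan-t5-xl-summary-map-reduce-1024",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Outputs of the "map" step: one summary per chunk of a long document
# (placeholder text, not taken from the training data).
chunk_summaries = """A research team introduces a method for long-document summarization.

The method splits the document into chunks and summarizes each chunk independently.

The chunk summaries are then consolidated into a single coherent summary."""

result = pipe(
    chunk_summaries,
    max_new_tokens=512,
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(result[0]["generated_text"])
```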

## Details 

This model is a fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on the [pszemraj/summary-map-reduce-v1](https://huggingface.co/datasets/pszemraj/summary-map-reduce-v1) dataset, with a context length of 1024 tokens for both input and output.

It achieves the following results on the evaluation set:
- Loss: 0.6039
- Num Input Tokens Seen: 7138765
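
Since inputs longer than 1024 tokens fall outside the training distribution, it can help to check the tokenized length of the concatenated chunk summaries before generation. A small sketch (the helper name and repo id are illustrative):

```python
# Sketch: check whether concatenated "map" summaries fit the 1024-token training context.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pszemraj/flan-t5-xl-summary-map-reduce-1024")

def fits_context(text: str, max_len: int = 1024) -> bool:
    """Return True if `text` encodes to at most `max_len` tokens (including EOS)."""
    n_tokens = len(tokenizer(text, truncation=False)["input_ids"])
    return n_tokens <= max_len

print(fits_context("Summary of chunk 1.\n\nSummary of chunk 2."))  # True for short inputs
```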

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 17868
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: PAGED_ADAMW with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2.0
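
For reference, the hyperparameters above map roughly onto `Seq2SeqTrainingArguments` as follows. This is an approximate reconstruction, not the exact training script: logging, saving, and data/collator settings are omitted, and `output_dir` is a placeholder.

```python
# Approximate reconstruction of the hyperparameters above (a sketch, not the original script).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-xl-summary-map-reduce-1024",  # placeholder
    learning_rate=8e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=17868,
    gradient_accumulation_steps=32,  # 2 x 32 (x num devices) -> total train batch size of 64
    optim="paged_adamw_32bit",       # PAGED_ADAMW; betas=(0.9, 0.999) and eps=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2.0,
)
```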