MikaSie committed on
Commit
4466e71
1 Parent(s): d447712

Upload README.md with huggingface_hub

Files changed (1): README.md +89 -60
README.md CHANGED
@@ -1,81 +1,110 @@
  ---
- license: apache-2.0
- base_model: t5-large
  tags:
- - generated_from_trainer
- datasets:
- - eur-lex-sum
  model-index:
- - name: T5_no_extraction_V1
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # T5_no_extraction_V1

- This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the eur-lex-sum dataset.
- It achieves the following results on the evaluation set:
- - Loss: 2.4511

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 40

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 8.3854 | 1.0 | 69 | 4.0580 |
- | 3.6609 | 2.0 | 138 | 3.1814 |
- | 3.1288 | 3.0 | 207 | 2.8681 |
- | 2.8599 | 4.0 | 276 | 2.7148 |
- | 2.6823 | 5.0 | 345 | 2.6240 |
- | 2.5613 | 6.0 | 414 | 2.5674 |
- | 2.4737 | 7.0 | 483 | 2.5317 |
- | 2.4000 | 8.0 | 552 | 2.5075 |
- | 2.3408 | 9.0 | 621 | 2.4859 |
- | 2.2871 | 10.0 | 690 | 2.4718 |
- | 2.2420 | 11.0 | 759 | 2.4634 |
- | 2.2020 | 12.0 | 828 | 2.4565 |
- | 2.1608 | 13.0 | 897 | 2.4491 |
- | 2.1254 | 14.0 | 966 | 2.4462 |
- | 2.0915 | 15.0 | 1035 | 2.4464 |
- | 2.0628 | 16.0 | 1104 | 2.4454 |
- | 2.0300 | 17.0 | 1173 | 2.4398 |
- | 2.0056 | 18.0 | 1242 | 2.4419 |
- | 1.9766 | 19.0 | 1311 | 2.4435 |
- | 1.9554 | 20.0 | 1380 | 2.4457 |
- | 1.9353 | 21.0 | 1449 | 2.4470 |
- | 1.9126 | 22.0 | 1518 | 2.4511 |

- ### Framework versions

- - Transformers 4.40.1
- - Pytorch 2.2.1+cu121
- - Datasets 2.17.1
- - Tokenizers 0.19.1
 
  ---
+ language: en
  tags:
+ - summarization
+ - abstractive
+ - hybrid
+ - multistep
+ datasets: dennlinger/eur-lex-sum
+ pipeline_tag: summarization
+ base_model: T5
  model-index:
+ - name: T5_no_extraction_V1
+   results:
+   - task:
+       type: summarization
+       name: Long, Legal Document Summarization
+     dataset:
+       name: eur-lex-sum
+       type: dennlinger/eur-lex-sum
+     metrics:
+     - type: ROUGE-1
+       value: 0.3033175377014679
+     - type: ROUGE-2
+       value: 0.1240611664538099
+     - type: ROUGE-L
+       value: 0.1994150614076662
+     - type: BERTScore
+       value: 0.8442945966507469
+     - type: BARTScore
+       value: -1.512467493558063
+     - type: BLANC
+       value: 0.1426287498004355
  ---

+ # Model Card for T5_no_extraction_V1

+ ## Model Details
+ ---
+ ### Model Description
+
+ This model is a fine-tuned version of T5. The research behind it applies a multi-step summarization approach to long, legal documents. Many decisions in the renewable energy space depend heavily on regulations, but these regulations are often long and complicated. The proposed architecture first uses one or more extractive summarization steps to compress the source text before the final summary is created by the abstractive summarization model. This particular variant, however, was trained on the raw dataset without any extractive pre-processing step (no extractive model, no compression ratio). The research used multiple extractive-abstractive model combinations, which can be found on https://huggingface.co/MikaSie. To obtain optimal results, feed the model an extractive summary as input, as the pipeline was designed this way!
+
+ The dataset used by this model is the [EUR-lex-sum](https://huggingface.co/datasets/dennlinger/eur-lex-sum) dataset. The evaluation metrics can be found in the metadata of this model card.
+ This model was introduced in the master's thesis of Mika Sie at Utrecht University, in collaboration with Power2X. More information can be found in PAPER_LINK.
+
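The compress-then-summarize flow described above can be illustrated offline. The sketch below uses a toy word-frequency sentence scorer as a hypothetical stand-in for the extractive step (it is NOT one of the extractive models evaluated in the thesis):

```python
# Toy sketch of the multi-step pipeline: extractive compression first,
# then the compressed text is passed to the abstractive model.
# The scorer below is a hypothetical illustration, NOT the thesis method.
from collections import Counter

def toy_extractive_summary(text: str, ratio: float = 0.5) -> str:
    """Keep the highest-scoring fraction of sentences, in original order."""
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    freqs = Counter(w.lower() for s in sentences for w in s.split())
    # Score each sentence by the average corpus frequency of its words
    scored = [
        (sum(freqs[w.lower()] for w in s.split()) / len(s.split()), i)
        for i, s in enumerate(sentences)
    ]
    keep = max(1, int(len(sentences) * ratio))
    top = sorted(sorted(scored, reverse=True)[:keep], key=lambda t: t[1])
    return '. '.join(sentences[i] for _, i in top) + '.'

text = ("The regulation applies to renewable energy producers. "
        "It defines reporting duties. Unrelated clause about fishing. "
        "Producers must report yearly energy output.")
compressed = toy_extractive_summary(text, ratio=0.5)
# `compressed` is what would then be fed to an abstractive model such as
# MikaSie/T5_no_extraction_V1 to produce the final summary.
```

In the actual pipeline the extractive step is a trained model and the abstractive step is this fine-tuned T5; the sketch only conveys the flow.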
+
+ - **Developed by:** Mika Sie
+ - **Funded by:** Utrecht University & Power2X
+ - **Language (NLP):** English
+ - **Finetuned from model:** T5
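The ROUGE values listed in this card's metadata are n-gram overlap F1 scores between generated and reference summaries. As intuition only, a minimal sketch of ROUGE-1 (not the evaluation code that produced the reported numbers) could look like:

```python
# Minimal ROUGE-1 F1 (unigram overlap), for intuition only; the scores in
# this card's metadata were produced with standard evaluation tooling.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the regulation applies to producers",
                  "the regulation applies to energy producers")
```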
+
+ ### Model Sources
+
+ - **Repository:** https://github.com/MikaSie/Thesis
+ - **Paper:** PAPER_LINK
+ - **Streamlit demo:** STREAMLIT_LINK
 
+ ## Uses
+ ---
+ ### Direct Use
+
+ This model can be used directly for summarizing long, legal documents. However, it is recommended to first compress the source text with an extractive summarization tool, such as a BERT-based extractive summarizer, before feeding it to this model, since the multi-step pipeline was designed to work with extractive summaries.
+ An example using the Hugging Face pipeline could be:
+
+ ```python
+ # pip install bert-extractive-summarizer
+ from summarizer import Summarizer
+ from transformers import pipeline
+
+ # Extractive step: compress the source document
+ extractive_model = Summarizer()
+ text = 'Original document text to be summarized'
+ extractive_summary = extractive_model(text)
+
+ # Abstractive step: generate the final summary with the fine-tuned model
+ abstractive_model = pipeline('summarization', model='MikaSie/T5_no_extraction_V1', tokenizer='MikaSie/T5_no_extraction_V1')
+ result = abstractive_model(extractive_summary)
+ ```
+
+ More information about the implementation can be found in the thesis report.
+ ### Out-of-Scope Use
+
+ Using this model without an extractive summarization step may not yield optimal results. It is recommended to follow the proposed multi-step summarization approach outlined in the model description for best performance.
+
+ ## Bias, Risks, and Limitations
+ ---
+ ### Bias
+
+ As with any language model, this model may inherit biases present in the training data. It is important to be aware of potential biases in the source text and to critically evaluate the generated summaries.
+
+ ### Risks
+
+ - The model may not always generate accurate or comprehensive summaries, especially for complex legal documents.
+ - The model may not generate truthful information.
+
+ ### Limitations
+
+ - The model may produce summaries that are overly abstractive or fail to capture important details.
+ - The model's performance may vary depending on the quality and relevance of the extractive summaries used as input.
+
+ ### Recommendations
+
+ - Carefully review and validate the generated summaries before relying on them for critical tasks.
+ - Consider using the model in conjunction with human review or other validation mechanisms to ensure the accuracy and completeness of the summaries.
+ - Experiment with different extractive summarization models or techniques to find the most suitable input for the abstractive model.
+ - Provide feedback and contribute to the ongoing research and development of the model to help improve its performance and address its limitations.
+ - Any actions taken based on this content are at your own risk.