---
base_model:
- meta-llama/Llama-3.1-70B-Instruct
pipeline_tag: summarization
---
<div align="center">
<b style="font-size: 40px;">SummLlama3.1-70B</b>
</div>
Are you looking for a summarizer that can generate more **human-preferred summaries** across multiple domains?
Our **SummLlama3.1-70B** could be exactly what you need!
SummLlama3.1-70B is initialized from Llama-3.1-70B-Instruct and further trained with Direct Preference Optimization (DPO) on large-scale summarization feedback (over 100K examples).
The feedback encompasses a wide range of input documents, from short to lengthy texts, in both dialogue and non-dialogue formats, spanning seven distinct domains:
- Four non-dialogue domains: News, Lifestyle, Report, Medical
- Three dialogue domains: Daily Life, Interview, Meeting
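The DPO step can be reproduced in spirit with the `trl` library. Below is a minimal sketch, assuming a JSONL preference dataset with `prompt`/`chosen`/`rejected` fields; the file name, hyperparameters, and `trl` version details are illustrative assumptions, not the authors' released training setup.

```python
# Minimal DPO fine-tuning sketch with Hugging Face TRL (not the authors'
# actual training code). The dataset file and hyperparameters are assumed
# for illustration; argument names (e.g., `processing_class` vs. the older
# `tokenizer`) vary across trl versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Llama-3.1-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference data: one {"prompt", "chosen", "rejected"} record
# per feedback example, where "chosen" is the human-preferred summary.
train_dataset = load_dataset(
    "json", data_files="summarization_feedback.jsonl", split="train"
)

args = DPOConfig(output_dir="SummLlama3.1-70B", beta=0.1)
trainer = DPOTrainer(
    model=model,                  # the policy model being trained
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```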
Here are the automated evaluation results (higher is better):
| **Model** | **Faithfulness** | **Completeness** | **Conciseness** | **Average** |
|--------------------|------------|-----------|-----------|----------|
| Llama3-70B-Instruct | 0.931 | 0.596 | 0.487 | 0.671 |
| Llama3.1-70B-Instruct | 0.927 | 0.624 | 0.458 | 0.670 |
| GPT-4o | 0.940 | 0.657 | 0.437 | 0.678 |
| SummLlama3.1-70B | 0.942 | 0.637 | 0.909 | 0.829 |
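For reference, here is a minimal inference sketch using Hugging Face Transformers. The repository id and prompt wording are assumptions for illustration; the model follows the Llama 3.1 chat template, applied via the tokenizer.

```python
# Minimal inference sketch for SummLlama3.1-70B with Transformers.
# The repo id and prompt below are assumed for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DISLab/SummLlama3.1-70B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

document = "..."  # the text you want summarized
messages = [
    {"role": "user", "content": f"Please summarize the following text:\n\n{document}"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens (the summary).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```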
Please refer to [our paper](https://arxiv.org/abs/2410.13116) to learn how to leverage LLM-generated feedback for text summarization.