---
base_model:
- meta-llama/Llama-3.1-70B-Instruct
pipeline_tag: summarization
---

<div align="center">
<b style="font-size: 40px;">SummLlama3.1-70B</b>
</div>

Are you looking for a summarizer that can generate more **human-preferred summaries** across multiple domains?

Our **SummLlama3.1-70B** could be exactly what you need!

SummLlama3.1-70B is initialized from Llama3.1-70B-Instruct and further trained with Direct Preference Optimization (DPO) on large-scale summarization feedback (over 100K instances).

The feedback encompasses a wide range of input documents, from short to lengthy texts, including both dialogue and non-dialogue formats, and spans seven distinct domains:

- Four non-dialogue domains: News, Lifestyle, Report, Medical
- Three dialogue domains: Daily Life, Interview, Meeting

Automated evaluation results:

| **Model** | **Faithfulness** | **Completeness** | **Conciseness** | **Average** |
|---------------------|-------|-------|-------|-------|
| Llama3-8B-Instruct | 0.864 | 0.583 | 0.450 | 0.632 |
| Llama3-70B-Instruct | 0.931 | 0.596 | 0.487 | 0.671 |
| GPT-4o | 0.940 | 0.657 | 0.437 | 0.678 |
| SummLlama3.1-70B | 0.942 | 0.637 | 0.909 | 0.829 |

Please refer to [our paper](https://arxiv.org/abs/2410.13116) for details on how LLM-generated feedback can be exploited in the context of text summarization.
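
Since SummLlama3.1-70B is initialized from Llama3.1-70B-Instruct, it should accept the same chat format and run with the standard Hugging Face `transformers` text-generation pipeline. Below is a minimal usage sketch, not an official recipe: the repository id placeholder, the prompt wording, and the generation settings are illustrative assumptions.

```python
# Minimal usage sketch. Assumptions (not from this card): the repo id
# placeholder, the prompt wording, and the generation settings.
import torch
from transformers import pipeline

model_id = "SummLlama3.1-70B"  # hypothetical; substitute this card's full repo path

summarizer = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # a 70B model typically needs several high-memory GPUs
)

document = "..."  # the document or dialogue to summarize

messages = [
    {"role": "user", "content": f"Please summarize the following text:\n\n{document}"},
]

result = summarizer(messages, max_new_tokens=512, do_sample=False)
# With chat-style input, the pipeline returns the full message list;
# the last message holds the model's generated summary.
print(result[0]["generated_text"][-1]["content"])
```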