---
base_model:
  - meta-llama/Llama-3.1-70B-Instruct
pipeline_tag: summarization
---
# SummLlama3.1-70B

Are you looking for a summarizer that can generate more human-preferred summaries across multiple domains?

Our SummLlama3.1-70B could be exactly what you need!

SummLlama3.1-70B is initialized from Llama-3.1-70B-Instruct and further trained with Direct Preference Optimization (DPO) on large-scale summarization feedback (over 100K examples); a rough sketch of this training setup appears after the list below.

The feedback encompasses a wide range of input documents, from short to lengthy texts, in both dialogue and non-dialogue formats, spanning seven distinct domains:

- Four non-dialogue domains: News, Lifestyle, Report, Medical
- Three dialogue domains: Daily Life, Interview, Meeting
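
As a rough illustration of this training setup, here is a minimal DPO sketch using the `trl` library. This is not the authors' actual recipe: the dataset contents, hyperparameters, and `output_dir` below are placeholders, and the exact `DPOTrainer` arguments vary across `trl` versions.

```python
# Minimal DPO sketch -- hypothetical configuration, not the actual training recipe.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Llama-3.1-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO consumes preference triples: a prompt plus a preferred ("chosen")
# and a dispreferred ("rejected") response. Toy placeholder data:
train_dataset = Dataset.from_dict({
    "prompt": ["Summarize the following document:\n..."],
    "chosen": ["A faithful, complete, and concise summary."],
    "rejected": ["A summary that omits or distorts key facts."],
})

args = DPOConfig(
    output_dir="summllama-dpo",     # placeholder
    beta=0.1,                       # strength of the KL penalty toward the reference model
    per_device_train_batch_size=1,
)

trainer = DPOTrainer(
    model=model,                    # a frozen reference copy is created automatically
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,     # `tokenizer=` in older trl versions
)
trainer.train()
```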

Here are the automated evaluation results:

| Config.               | Faithfulness | Completeness | Conciseness | Average |
|-----------------------|--------------|--------------|-------------|---------|
| Llama3-70B-Instruct   | 0.931        | 0.596        | 0.487       | 0.671   |
| Llama3.1-70B-Instruct | 0.927        | 0.624        | 0.458       | 0.670   |
| GPT-4o                | 0.940        | 0.657        | 0.437       | 0.678   |
| SummLlama3.1-70B      | 0.942        | 0.637        | 0.909       | 0.829   |

Please refer to our paper to learn how to leverage LLM-generated feedback in the context of text summarization.
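
For completeness, here is a minimal inference sketch with `transformers`. The repository id `DISLab/SummLlama3.1-70B` and the prompt wording are assumptions for illustration, not an officially prescribed template.

```python
# Minimal inference sketch; repo id and prompt wording are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DISLab/SummLlama3.1-70B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard the 70B weights across available GPUs
)

document = "..."  # the text to summarize
messages = [
    {"role": "user", "content": f"Please summarize the input document.\n\n{document}"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
summary = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(summary)
```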