Modalities: Text
Formats: csv
Languages: English
Libraries: Datasets, pandas
Commit 8d8e09f by knkarthick
1 Parent(s): 834c300

Update README.md

Files changed (1): README.md (+57 -61)

README.md CHANGED
@@ -6,7 +6,7 @@ language_creators:
  languages:
  - en
  licenses:
- - cc-by-nc-nd-4.0
+ - mit
  multilinguality:
  - monolingual
  size_categories:
@@ -20,83 +20,79 @@ task_categories:
  - email subject
  - meeting title
  task_ids:
- - summarization-other-conversations-summarization
- paperswithcode_id: samsum-corpus
- pretty_name: SAMSum Corpus
+ - DialogSum: A Real-life Scenario Dialogue Summarization Dataset [Refer GIT]
+ pretty_name: DIALOGSum Corpus
  ---
- # Dataset Card for SAMSum Corpus
+ # Dataset Card for DIALOGSum Corpus
  ## Dataset Description
- - **Homepage:** https://arxiv.org/abs/1911.12237v2
- - **Repository:** [Needs More Information]
- - **Paper:** https://arxiv.org/abs/1911.12237v2
- - **Leaderboard:** [Needs More Information]
+ ### Links
+ - **Homepage:** https://aclanthology.org/2021.findings-acl.449
+ - **Repository:** https://github.com/cylnlp/dialogsum
+ - **Paper:** https://aclanthology.org/2021.findings-acl.449
  - **Point of Contact:** https://huggingface.co/knkarthick
+
  ### Dataset Summary
- The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, and they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation, written in the third person.
- The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
- ### Supported Tasks and Leaderboards
- [Needs More Information]
+ DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.
  ### Languages
  English
+
  ## Dataset Structure
  ### Data Instances
- The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
+ DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues split into train, test and validation.
  The first instance in the training set:
- {'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
+ {'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': 'get a check-up'}
  ### Data Fields
  - dialogue: text of dialogue.
  - summary: human written summary of the dialogue.
- - id: unique id of an example.
+ - topic: human written topic/one-liner of the dialogue.
+ - id: unique file id of an example.
+
  ### Data Splits
- - train: 14732
- - val: 818
- - test: 819
+ - train: 12460
+ - val: 1500
+ - test: 1500
+ - holdout: 100 [only 3 features: id, dialogue, topic]
+
  ## Dataset Creation
  ### Curation Rationale
  In paper:
- > In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
- As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
- ### Source Data
- #### Initial Data Collection and Normalization
- In paper:
- > We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
- #### Who are the source language producers?
+ > We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2020), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure and travel. Most conversations take place between friends, colleagues, and between service providers and customers.
+
+ Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
+ - Under rich real-life scenarios, including more diverse task-oriented scenarios;
+ - Have clear communication patterns and intents, which is valuable to serve as summarization sources;
+ - Have a reasonable length, which comforts the purpose of automatic summarization.
+
+ We ask annotators to summarize each dialogue based on the following criteria:
+ - Convey the most salient information;
+ - Be brief;
+ - Preserve important named entities within the conversation;
+ - Be written from an observer perspective;
+ - Be written in formal language.
+ ### Who are the source language producers?
  linguists
- ### Annotations
- #### Annotation process
- In paper:
- > Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
- #### Who are the annotators?
+ ### Who are the annotators?
  language experts
- ### Personal and Sensitive Information
- None, see above: Initial Data Collection and Normalization
- ## Considerations for Using the Data
- ### Social Impact of Dataset
- [Needs More Information]
- ### Discussion of Biases
- [Needs More Information]
- ### Other Known Limitations
- [Needs More Information]
- ## Additional Information
- ### Dataset Curators
- [Needs More Information]
- ### Licensing Information
- non-commercial licence: CC BY-NC-ND 4.0
- ### Citation Information
+
+ ## Licensing Information
+ Licence: MIT
+ ## Citation Information
  ```
- @inproceedings{gliwa-etal-2019-samsum,
-     title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
-     author = "Gliwa, Bogdan and
-       Mochol, Iwona and
-       Biesek, Maciej and
-       Wawer, Aleksander",
-     booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
-     month = nov,
-     year = "2019",
-     address = "Hong Kong, China",
+ @inproceedings{chen-etal-2021-dialogsum,
+     title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
+     author = "Chen, Yulong and
+       Liu, Yang and
+       Chen, Liang and
+       Zhang, Yue",
+     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
+     month = aug,
+     year = "2021",
+     address = "Online",
      publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/D19-5409",
-     doi = "10.18653/v1/D19-5409",
-     pages = "70--79"
- }
- ```
+     url = "https://aclanthology.org/2021.findings-acl.449",
+     doi = "10.18653/v1/2021.findings-acl.449",
+     pages = "5062--5074",
+ }
+ ```
+ ## Contributions
+ Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset.
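A note on the updated card's `dialogue` field: turns are newline-separated and each is prefixed with a `#Person1#`/`#Person2#` speaker tag, as in the `train_0` instance shown in the diff. A minimal sketch of splitting such a string into (speaker, utterance) pairs; the `split_turns` helper is illustrative, not part of the dataset or any library:

```python
# Two turns excerpted from the train_0 instance shown in the card.
dialogue = (
    "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n"
    "#Person2#: I found it would be a good idea to get a check-up."
)

def split_turns(text):
    """Split a DialogSum dialogue string into (speaker, utterance) pairs."""
    turns = []
    for line in text.split("\n"):
        # Each turn looks like "#PersonN#: utterance"; split on the
        # first ": " and strip the surrounding "#" marks.
        speaker, _, utterance = line.partition(": ")
        turns.append((speaker.strip("#"), utterance))
    return turns

turns = split_turns(dialogue)
print(turns[0])  # ('Person1', "Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?")
```

Assuming the repo id matches this page, the full dataset should load with `datasets.load_dataset("knkarthick/dialogsum")` and expose the `id`, `dialogue`, `summary` and `topic` columns described under Data Fields.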