---
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - '*/*_clustered/*/*_response_L512_train.json'
  - split: dev
    path:
    - '*/*_clustered/*/*_response_L512_dev.json'
---
# Multi-Questioner Dialogue (MQDialog) Dataset

## Dataset Details

### Dataset Description

The Multi-Questioner Dialogue (MQDialog) dataset is designed to facilitate research in questioner-aware personalization. It contains dialogues between each responder and a variety of questioners, derived from English and Chinese scripts of popular TV shows as well as real-world conversations. Selected leading actors act as responders, while other characters or contacts serve as questioners, for a total of 12 responders and 173 questioners. The dataset supports research on dialogue generation, response evaluation, and questioner-aware personalization in multi-turn conversations.
### Dataset Sources
- English scripts: The Big Bang Theory, Friends, and Modern Family.
- Chinese scripts: My Own Swordsman and Empresses in the Palace.
- Real-world conversations (WeChat): Records from a single user, focusing on two-person chats. (Not public, but the data can be extracted with the code we provide.)
## Direct Use
The dataset is suitable for:
- Training and evaluating questioner-aware multi-turn dialogue systems.
- Studying personality-aligned response generation.
- Benchmarking the performance of dialogue models with multi-questioner setups.
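
Given the `configs` section in the YAML header above, the two splits can be loaded directly with the 🤗 `datasets` library. A minimal sketch, with a placeholder repository ID:

```python
from datasets import load_dataset

# "<org>/MQDialog" is a placeholder; substitute the actual repository ID.
# The train/dev splits are declared in the `configs` YAML header above.
mqdialog = load_dataset("<org>/MQDialog")

train, dev = mqdialog["train"], mqdialog["dev"]
print(train[0]["target_role"], "responding to", train[0]["input_role"])
```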
## Dataset Structure
- Responders: 12 in total, leading actors from TV scripts plus a single WeChat user.
- Questioners: 173 individuals interacting with the responders; details are listed in the table below.
- Splits: Randomly divided into training (3761 dialogues per responder on average) and testing (917 dialogues per responder on average).
| Language | Data Source | # Questioners | Questioner Examples | Responder | # train | # test |
|----------|-------------|---------------|---------------------|-----------|---------|--------|
| English | The Big Bang Theory | 14 | Priya, Barry, Howard, Leonard, etc. | Sheldon | 4805 | 1101 |
| English | The Big Bang Theory | 12 | Bernadette, Penny, Raj, Stuart, etc. | Leonard | 4607 | 1014 |
| English | Friends | 12 | Amy, Chandler, Charlie, Joey, etc. | Rachel | 3768 | 870 |
| English | Friends | 20 | Ben, Mike, Gary, Paul, etc. | Ross | 3839 | 960 |
| English | Modern Family | 9 | Alex, Cameron, Dylan, Gloria, etc. | Claire | 1161 | 281 |
| English | Modern Family | 8 | Haley, Jay, Luke, Mitchell, etc. | Phil | 881 | 246 |
| Chinese | My Own Swordsman | 16 | Bai Sanniang, Guo Furong, Mo Xiaobei, etc. | Tong Xiangyu | 3200 | 831 |
| Chinese | My Own Swordsman | 16 | Bao Daren, Ji Wuming, Zhu Wushuang, etc. | Bai Zhantang | 2995 | 857 |
| Chinese | My Own Swordsman | 8 | Li Dazui, Xing Butou, Yan Xiaoliu, etc. | Lv Xiucai | 1635 | 409 |
| Chinese | Empresses in the Palace | 17 | Cao Guiren, Mei Zhuang, Liu Zhu, etc. | Zhen Huan | 1229 | 350 |
| Chinese | Empresses in the Palace | 11 | Consort Hua, Empress, Huan Bi, etc. | Emperor | 704 | 200 |
| Chinese | WeChat Records | 30 | Author's contacts | Author | - | - |
## Data Files & Code

For each responder, dialogues with different questioners are stored in the corresponding folder, `diags_two_role_{responder_name}`. Intermediate results from data processing are also provided. The final datasets used for questioner-aware personalization are:

- `{script_name}_diags_{responder_name}_{questioner_name}_{responder_name}_response_L512_dev.json`
- `{script_name}_diags_{responder_name}_{questioner_name}_{responder_name}_response_L512_train.json`
Additionally, dialogues with different questioners are clustered based on query similarity. The clustering results are stored in the `diags_two_role_{responder_name}_clustered` folder.
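
The card does not state which clustering algorithm produced these results; purely as an illustration of clustering queries by text similarity, here is a minimal sketch using TF-IDF features and k-means (both choices are assumptions, not the authors' pipeline):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_queries(queries, n_clusters=5):
    """Assign a cluster label to each questioner query by text similarity
    (illustrative only; not the released clustering code)."""
    vectors = TfidfVectorizer().fit_transform(queries)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)

labels = cluster_queries(
    ["Did you hear? Isn't it terrible?",
     "They wouldn't let me in.",
     "Have you seen him?"],
    n_clusters=2,
)
```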
We also provide the preprocessed raw data for each script, named `{script_name}_dialgs.json`. To extract the dialogues for a given responder, run the Python file `extract_two_role_diag_{responder_name}.py` in the corresponding subfolder.
Related functions:

- `get_role_list()`: get the full list of role names
- `extract_diag_between_two_role()`: extract and keep only the dialogues between two roles
- `clean_diag()`: remove duplicates, conversations with only one speaker, and empty values
- `clean_diag_with_repeated()`: remove conversations with only one speaker and empty values (duplicates are kept)
- `split_train_and_dev()`: split the data into training and validation sets
- `split_diag_with_sliding_window()`: construct dialogues of limited length with a sliding window
- `extract_diag_for_target_from_role_conv()`: keep only dialogues whose response comes from the target role
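
As an illustration, a minimal sketch of what `split_diag_with_sliding_window()` might do, assuming the `L512` suffix in the file names refers to the length limit (a sketch, not the released implementation):

```python
def split_diag_with_sliding_window(turns, max_len=512, stride=2):
    """Cut a long dialogue into overlapping pieces whose accumulated text
    length stays within max_len (a rough character-level proxy for tokens)."""
    pieces, start = [], 0
    while start < len(turns):
        piece, length = [], 0
        for turn in turns[start:]:
            length += len(turn["value"])
            if piece and length > max_len:
                break  # piece is full; the next window restarts at start+stride
            piece.append(turn)
        pieces.append(piece)
        if start + len(piece) >= len(turns):
            break  # the last piece reached the end of the dialogue
        start += stride
    return pieces
```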
## Data Instances

Below is an example from the dataset. It contains a conversation between the `target_role` (i.e., the responder) and the `input_role` (i.e., the questioner).
```json
{
    "id": "episode_14_chunk_6_index_0_part2_piece_0",
    "conversations": [
        {
            "from": "Bernadette",
            "value": "Did you hear? Isn’t it terrible?"
        },
        {
            "from": "Leonard",
            "value": "Have you seen him?"
        },
        {
            "from": "Bernadette",
            "value": "They wouldn’t let me in. Oh my Howie."
        },
        {
            "from": "Leonard",
            "value": "It’ll be okay. It’ll be okay."
        }
    ],
    "target_role": "Leonard",
    "target_role_short": "Leonard",
    "input_role": "Bernadette",
    "input_role_short": "Bernadette",
    "role_pair_id": 8,
    "cluster_id": 2
}
```

The `cluster_id` field is present only in the clustered data.
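
One way to consume such an instance is to turn every responder turn into a training target, with the preceding turns as context. A minimal sketch; the pairing scheme, file path, and top-level list layout are assumptions, not part of the released code:

```python
import json

def to_training_pairs(instance):
    """Yield (questioner, context, response) triples: each target_role turn
    becomes a response, with all preceding turns joined as context."""
    context = []
    for turn in instance["conversations"]:
        if turn["from"] == instance["target_role"] and context:
            yield instance["input_role"], list(context), turn["value"]
        context.append(f"{turn['from']}: {turn['value']}")

# Hypothetical path; each *_response_L512_*.json file is assumed to hold
# a list of instances shaped like the example above.
with open("path/to/..._response_L512_train.json") as f:
    for instance in json.load(f):
        for questioner, context, response in to_training_pairs(instance):
            print(f"{questioner} -> {response}")
```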
## Dataset Creation

### Curation Rationale

MQDialog was created to address the need for a multilingual, multi-questioner dataset that supports questioner-aware personalized response generation in diverse conversational contexts.
### Data Collection and Processing
- Scripts: Extracted dialogues between a responder (leading actor) and questioners (other characters), ensuring a clean dataset by removing errors, repeated content, and irrelevant entries.
- Real-world records: Focused on one-on-one conversations, with new dialogue sessions defined by a time gap (e.g., 3 hours); a sketch of this segmentation follows the list.
- Filtering: Questioners with fewer than 20 interactions were excluded to ensure meaningful analysis.
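
A minimal sketch of the time-gap session segmentation described above, assuming each message is a dict with a `datetime` timestamp under a hypothetical `time` key:

```python
from datetime import timedelta

def split_sessions(messages, gap=timedelta(hours=3)):
    """Split chronologically sorted messages into sessions: a new session
    starts whenever the pause between consecutive messages exceeds `gap`."""
    sessions = []
    for msg in messages:
        if sessions and msg["time"] - sessions[-1][-1]["time"] <= gap:
            sessions[-1].append(msg)  # continue the current session
        else:
            sessions.append([msg])    # gap exceeded: open a new session
    return sessions
```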
## Recommendations
- Use the dataset in conjunction with other corpora to mitigate cultural or linguistic biases.
- Ensure responsible use of the data, particularly when training models for real-world applications.