---
language:
- en
paperswithcode_id: social-iqa
pretty_name: Social Interaction QA
dataset_info:
  features:
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answerA
    dtype: string
  - name: answerB
    dtype: string
  - name: answerC
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 6389954
    num_examples: 33410
  - name: validation
    num_bytes: 376508
    num_examples: 1954
  download_size: 2198056
  dataset_size: 6766462
---
# Dataset Card for "social_i_qa"
## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- **Homepage:** https://leaderboard.allenai.org/socialiqa/submissions/get-started
- **Repository:** More Information Needed
- **Paper:** [Social IQa: Commonsense Reasoning about Social Interactions](https://arxiv.org/abs/1904.09728)
- **Point of Contact:** More Information Needed
- **Size of downloaded dataset files:** 2.20 MB
- **Size of the generated dataset:** 6.76 MB
- **Total amount of disk used:** 8.97 MB
### Dataset Summary
We introduce Social IQa: Social Interaction QA, a new question-answering benchmark for testing social commonsense intelligence. Contrary to many prior benchmarks that focus on physical or taxonomic knowledge, Social IQa focuses on reasoning about people’s actions and their social implications. For example, given an action like "Jesse saw a concert" and a question like "Why did Jesse do this?", humans can easily infer that Jesse wanted "to see their favorite performer" or "to enjoy the music", and not "to see what's happening inside" or "to see if it works". The actions in Social IQa span a wide variety of social situations, and answer candidates contain both human-curated answers and adversarially-filtered machine-generated candidates. Social IQa contains over 37,000 QA pairs for evaluating models’ abilities to reason about the social implications of everyday events and situations.
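The dataset can be loaded with the 🤗 `datasets` library. The snippet below is a minimal, illustrative sketch: the `social_i_qa` identifier comes from this card, and depending on your `datasets` version the script-based loader may additionally require `trust_remote_code=True`.

```python
# Minimal sketch: load Social IQa with the 🤗 `datasets` library.
# Depending on your `datasets` version, this script-based dataset may
# additionally require `trust_remote_code=True`.
from datasets import load_dataset

dataset = load_dataset("social_i_qa")

# Two splits are provided: "train" (33,410 examples) and "validation" (1,954 examples).
print(dataset)
print(dataset["train"][0])
```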
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.20 MB
- **Size of the generated dataset:** 6.76 MB
- **Total amount of disk used:** 8.97 MB
An example of 'validation' looks as follows.
```json
{
    "answerA": "sympathetic",
    "answerB": "like a person who was unable to help",
    "answerC": "incredulous",
    "context": "Sydney walked past a homeless woman asking for change but did not have any money they could give to her. Sydney felt bad afterwards.",
    "label": "1",
    "question": "How would you describe Sydney?"
}
```
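The string `label` appears to be a 1-based index over the three answer candidates (`"1"` → `answerA`, `"2"` → `answerB`, `"3"` → `answerC`); the short sketch below resolves it under that assumption:

```python
# Sketch: resolve the string `label` to the chosen answer text.
# Assumption: labels are 1-based indices over (answerA, answerB, answerC),
# so "1" selects answerA.
from datasets import load_dataset

dataset = load_dataset("social_i_qa")
example = dataset["validation"][0]

candidates = [example["answerA"], example["answerB"], example["answerC"]]
gold = candidates[int(example["label"]) - 1]

print(example["question"])
print("Gold answer:", gold)
```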
### Data Fields
The data fields are the same among all splits.
#### default
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answerA`: a `string` feature.
- `answerB`: a `string` feature.
- `answerC`: a `string` feature.
- `label`: a `string` feature.
### Data Splits
| name    | train | validation |
|---------|-------|------------|
| default | 33410 | 1954       |
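For illustration only (the dataset does not prescribe a prompt format), an example can be turned into a multiple-choice prompt along these lines:

```python
# Illustrative sketch: format one Social IQa example as a multiple-choice prompt.
# The prompt layout is an arbitrary choice, not an official template.
from datasets import load_dataset


def to_prompt(example: dict) -> str:
    return (
        f"Context: {example['context']}\n"
        f"Question: {example['question']}\n"
        f"A) {example['answerA']}\n"
        f"B) {example['answerB']}\n"
        f"C) {example['answerC']}\n"
        "Answer:"
    )


dataset = load_dataset("social_i_qa")
print(to_prompt(dataset["validation"][0]))
```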
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to @bhavitvyamalik, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.