---
dataset_info:
- config_name: progressive-clues
features:
- name: qc_id
dtype: string
- name: clue_text
dtype: string
- name: n_clues
dtype: int32
- name: clean_answers
sequence: string
- name: orig_qid
dtype: string
- name: full_quiz_question
dtype: string
- name: clue_spans
sequence:
sequence: int32
- name: orig_answer_string
dtype: string
- name: metadata
struct:
- name: category
dtype: string
- name: subcategory
dtype: string
- name: level
dtype: string
- name: question_set
dtype: string
splits:
- name: eval
num_bytes: 2799081
num_examples: 3042
download_size: 1269797
dataset_size: 2799081
- config_name: questions
features:
- name: qid
dtype: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: question
dtype: string
- name: clue_spans
sequence:
sequence: int32
- name: answer
dtype: string
- name: answer_primary
dtype: string
- name: clean_answers
sequence: string
- name: wiki_page
dtype: string
- name: page_summary
dtype: string
- name: difficulty
dtype: string
- name: question_set
dtype: string
splits:
- name: eval
num_bytes: 2250701
num_examples: 782
download_size: 1205383
dataset_size: 2250701
configs:
- config_name: progressive-clues
data_files:
- split: eval
path: progressive-clues/eval-*
- config_name: questions
data_files:
- split: eval
path: questions/eval-*
license: mit
task_categories:
- question-answering
- text-retrieval
- text-generation
language:
- en
tags:
- science
- physics
- biology
- chemistry
- history
- quiz
- question-answering
- quizbowl
- music
- politics
- literature
- television
pretty_name: Protobowl Quiz dataset
size_categories:
- 1K<n<10K
---
# Progressive Quiz Bowl Clues Dataset
## Overview
This dataset contains Quiz Bowl questions and their corresponding progressive clues, designed for evaluating question-answering systems.
The `progressive-clues` subset additionally includes GPT-3.5-generated categories and subcategories specific to each progressive clue.
## Dataset Information
- **Name**: protobowl-11-13
- **Version**: 1.0
- **Maintainer**: mgor
- **Hub URL**: [https://huggingface.co/datasets/mgor/protobowl-11-13](https://huggingface.co/datasets/mgor/protobowl-11-13)
## Features
The `progressive-clues` subset includes the following features for each entry:
- `qc_id`: Unique identifier for the question clue
- `orig_qid`: Original question ID
- `clue_text`: The text of the current clue
- `full_quiz_question`: The complete question text
- `n_clues`: Number of clues consumed
- `clue_spans`: Sequence of spans indicating the clue positions in the full question
- `orig_answer_string`: The original (unnormalized) answer string
- `clean_answers`: List of clean (normalized) acceptable answers
- `metadata`:
  - `category`: Main category of the question
  - `subcategory`: Subcategory of the question
  - `level`: Difficulty level of the question
  - `question_set`: The tournament or set the question is from
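To make the `clue_spans` feature concrete, here is a small self-contained sketch (using made-up question text, not an actual dataset entry) showing how character-offset spans recover individual clues from `full_quiz_question`:

```python
# Hypothetical clue texts; real entries come from the dataset itself.
clue_texts = [
    "This scientist formulated three laws of motion.",
    "He also invented calculus independently of Leibniz.",
    "Name this author of the Principia Mathematica.",
]
full_quiz_question = " ".join(clue_texts)

# Derive [start, end) character offsets for each clue, mirroring
# the nested-sequence structure of the clue_spans feature.
clue_spans, pos = [], 0
for text in clue_texts:
    start = full_quiz_question.index(text, pos)
    clue_spans.append([start, start + len(text)])
    pos = start + len(text)

# Recover each clue from the full question using its span.
recovered = [full_quiz_question[s:e] for s, e in clue_spans]
assert recovered == clue_texts
```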
## Data Processing
The progressive-clues subset of the dataset was created by processing the original Protobowl questions and applying the following transformations:
1. Generating categories and subcategories using GPT-3.5
2. Cleaning and normalizing answers
3. Structuring the data into a format suitable for progressive question-answering tasks
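The exact cleaning pipeline behind step 2 is not documented here, but a plausible sketch of answer normalization (lowercasing, accent stripping, and removal of bracketed alternate-answer directives) might look like this — treat every detail below as an assumption, not the dataset's actual procedure:

```python
import re
import unicodedata

def normalize_answer(answer: str) -> str:
    """Sketch of a normalization step; the dataset's real pipeline may differ."""
    # Strip accents, e.g. "Dvořák" -> "Dvorak".
    answer = unicodedata.normalize("NFKD", answer)
    answer = "".join(c for c in answer if not unicodedata.combining(c))
    # Drop bracketed/parenthesized directives like "[or Newton]".
    answer = re.sub(r"\[.*?\]|\(.*?\)", "", answer)
    # Lowercase, remove punctuation, collapse whitespace.
    answer = re.sub(r"[^\w\s]", "", answer.lower())
    return " ".join(answer.split())

print(normalize_answer("Sir Isaac Newton [or Newton]"))  # -> sir isaac newton
```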
## Usage
To use this dataset, you can load it using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("mgor/protobowl-11-13", "progressive-clues", split="eval")
```
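Downstream, a minimal check of a model's guess against the `clean_answers` list could be sketched as follows (the matching rule here is an assumption; a real evaluation might use token-level or fuzzy matching):

```python
def is_correct(guess: str, clean_answers: list[str]) -> bool:
    # Case-insensitive exact match against the normalized answer list.
    # This rule is an assumption, not the dataset's official metric.
    normalized = {a.strip().lower() for a in clean_answers}
    return guess.strip().lower() in normalized

# Illustrative values; not actual dataset entries.
print(is_correct("Isaac Newton", ["isaac newton", "newton"]))  # True
print(is_correct("Leibniz", ["isaac newton", "newton"]))       # False
```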
## Notes
- The dataset includes questions primarily from Middle School tournaments.
- Some questions may have multiple categories or subcategories assigned to them.
- The GPT-3.5 generated categories and subcategories may differ from the original Protobowl categories in some cases. |