|
--- |
|
configs:
- config_name: default
  data_files:
  - path: train/*.arrow
    split: train
|
task_categories: |
|
- text-generation |
|
language: |
|
- en |
|
size_categories: |
|
- 1M<n<10M |
|
pretty_name: conditional task generation with attributes |
|
--- |
|
|
|
# Dataset Card for ctga-v1 |
|
|
|
## Dataset Details |
|
|
|
`ctga-v1`, or conditional task generation with attributes, is a new dataset created by remixing existing instruction tuning datasets ([P3](https://github.com/bigscience-workshop/promptsource)) to train [Bonito](https://huggingface.co/BatsResearch/bonito-v1).
|
|
|
```python3
from datasets import load_dataset

# download and load the full ctga-v1 dataset from the Hugging Face Hub
dataset = load_dataset("BatsResearch/ctga-v1")
```
|
|
|
### Dataset Description |
|
|
|
- **Repository:** [Github Repo](https://github.com/BatsResearch/bonito) |
|
- **Paper:** [arXiv](TODO)
|
- **Point of Contact:** [Nihal V. Nayak](mailto:nnayak2@cs.brown.edu) |
|
|
|
## Dataset Creation |
|
|
|
The dataset is derived from [P3](https://github.com/bigscience-workshop/promptsource) by annotating 323 prompt templates from 39 datasets with 16 task types. |
|
|
|
The prompt templates in P3 are remixed to create the meta-templates, which, in turn, generate the training examples. |
|
|
|
The meta-template input has a task type attribute (`<|tasktype|>`) followed by the unannotated text, or context (`<|context|>`).

The meta-template output comprises the attributed task, i.e., the prompt or task description with the context filled in (`{context}`), followed by a pipe symbol (`<|pipe|>`) and the solution to the task.

We use the `<|pipe|>` symbol to separate the instruction and response pair used for adapting the downstream model.
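As an illustration, the instruction and response for the downstream model can be recovered by splitting the reformatted `output` field on the `<|pipe|>` separator. The following is a minimal sketch, assuming each `output` contains exactly one `<|pipe|>` token as described above:

```python3
from datasets import load_dataset

dataset = load_dataset("BatsResearch/ctga-v1", split="train")

# the generated task and its solution are joined by the <|pipe|> token
example = dataset[0]
instruction, response = example["output"].split("<|pipe|>", maxsplit=1)

print("Instruction:", instruction.strip())
print("Response:", response.strip())
```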
|
|
|
|
|
### Data Instances |
|
|
|
Each data instance contains the following features: _context_, _task_input_, _task_output_, _dataset_, _dataset_config_, _task_type_, _input_, and _output_.

The (_input_, _output_) pair is used to train the Bonito model.
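For a quick look at a single instance and its fields (described in the next section), a minimal sketch:

```python3
from datasets import load_dataset

dataset = load_dataset("BatsResearch/ctga-v1", split="train")

example = dataset[0]
print(list(example.keys()))  # context, task_input, task_output, dataset, ...
print(example["task_type"])  # one of the 16 annotated task types
print(example["input"])      # reformatted input used to train Bonito
print(example["output"])     # reformatted output with the instruction and response
```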
|
|
|
|
|
### Data Fields |
|
|
|
- 'context': input context |
|
- 'task_input': prompted input without context |
|
- 'task_output': corresponding output
|
- 'dataset': source dataset |
|
- 'dataset_config': source dataset configuration |
|
- 'task_type': corresponding task type
|
- 'input': reformatted input |
|
- 'output': reformatted output |
|
|
|
|
|
### Source Data |
|
|
|
All the datasets are sourced from the Hugging Face `datasets` library (a filtering sketch follows the list below).
|
|
|
- Extractive Question Answering & Question Generation |
|
- adversarial_qa/dbert |
|
- adversarial_qa/dbidaf |
|
- adversarial_qa/droberta |
|
- duorc/ParaphraseRC |
|
- duorc/SelfRC |
|
- squad |
|
|
|
- Topic Classification |
|
- ag_news |
|
- dbpedia_14 |
|
- hellaswag |
|
- duorc/ParaphraseRC |
|
- duorc/SelfRC |
|
- squad |
|
|
|
- Sentiment Analysis |
|
- amazon_polarity |
|
- imdb |
|
- rotten_tomatoes |
|
- yelp_review_full |
|
|
|
- Natural Language Inference |
|
- anli |
|
- super_glue/cb |
|
|
|
- Multiple-Choice Question Answering |
|
- app_reviews |
|
- cosmos_qa |
|
- dream |
|
- qasc |
|
- quail |
|
- quartz |
|
- race/all |
|
- social_i_qa |
|
- super_glue/boolq |
|
- super_glue/record |
|
- wiki_hop/original |
|
|
|
- Text Generation |
|
- app_reviews |
|
- cnn_dailymail/3.0.0 |
|
- dream |
|
- duorc/ParaphraseRC |
|
- duorc/SelfRC |
|
- gigaword |
|
- samsum |
|
|
|
- Summarization |
|
- cnn_dailymail/3.0.0 |
|
- duorc/ParaphraseRC |
|
- duorc/SelfRC |
|
- gigaword |
|
- multi_news
|
- samsum |
|
- xsum |
|
|
|
- Paraphrase Generation & Identification |
|
- glue/mrpc |
|
- paws/labeled_final
|
|
|
- Yes-No Question Answering |
|
- race/all |
|
- social_i_qa |
|
- super_glue/boolq |
|
|
|
- Sentence Completion |
|
- hellaswag |
|
- super_glue/copa |
|
|
|
- Textual Entailment |
|
- super_glue/rte |
|
|
|
- Word Sense Disambiguation |
|
- super_glue/wic |
|
|
|
- Coreference Resolution |
|
- super_glue/wsc.fixed |
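Because every ctga-v1 example records its provenance in the `dataset`, `dataset_config`, and `task_type` fields, the training split can be subset by source dataset or task type. A minimal sketch, assuming the `dataset` field stores the source dataset name exactly as listed above (e.g., `squad`):

```python3
from datasets import load_dataset

dataset = load_dataset("BatsResearch/ctga-v1", split="train")

# keep only the examples derived from SQuAD
squad_subset = dataset.filter(lambda example: example["dataset"] == "squad")

# list the task types that the SQuAD-derived examples cover
print(sorted(set(squad_subset["task_type"])))
```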
|
|
|
|
|
## Citation |
|
|
|
|
|
|
**BibTeX:** |
|
|
|
```bibtex
@inproceedings{bonito:aclfindings24,
  title = {Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation},
  author = {Nayak, Nihal V. and Nan, Yiyang and Trost, Avi and Bach, Stephen H.},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2024},
  year = {2024}
}
```
|
|