Commit 07c122b
Parent(s): 1fa3a8e

Update metadata

README.md CHANGED
@@ -14,10 +14,10 @@ size_categories:
 source_datasets:
 - original
 task_categories:
-- conversational
 - text-generation
 - fill-mask
 task_ids:
+- conversational
 - dialogue-generation
 - dialogue-modeling
 - language-modeling
@@ -162,15 +162,19 @@ dataset_info:
 
 - **Homepage:** https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59
 - **Repository:** https://github.com/google/airdialogue
-- **Paper:** https://
+- **Paper:** https://aclanthology.org/D18-1419/
 - **Leaderboard:** https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59
 - **Point of Contact:** [AirDialogue-Google](mailto:airdialogue@gmail.com)
-[
+- **Point of Contact:** [Wei Wei](mailto:wewei@google.com)
 
 ### Dataset Summary
 
 AirDialogue is a large dataset that contains 402,038 goal-oriented conversations. To collect this dataset, we create a context generator which provides travel and flight restrictions. Human annotators are then asked to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions.
 
+News in v1.3:
+- We have included the test split of the AirDialogue dataset.
+- We have included the meta context for OOD2 in the original AirDialogue paper.
+
 ### Supported Tasks and Leaderboards
 
 We use perplexity and BLEU score to evaluate the quality of the language generated by the model. We also compare the dialogue state s generated by the model and the ground-truth state s0. Two categories of metrics are used: exact match scores and scaled scores.
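The exact match versus scaled scoring mentioned in the Supported Tasks context above can be illustrated with a minimal sketch. The state fields (`name`, `flight`, `action`) and the sample values below are hypothetical stand-ins for the ground-truth state s0 and the predicted state; see the linked airdialogue repository for the project's own evaluation tooling.

```python
# Minimal sketch of the two scoring styles named above: exact match vs. scaled
# (partial-credit) comparison of a predicted dialogue state against the ground
# truth state s0. Field names and sample values are hypothetical illustrations.
from typing import Dict


def exact_match_score(predicted: Dict[str, str], ground_truth: Dict[str, str]) -> float:
    """1.0 only when every ground-truth field is reproduced exactly, else 0.0."""
    return float(all(predicted.get(key) == value for key, value in ground_truth.items()))


def scaled_score(predicted: Dict[str, str], ground_truth: Dict[str, str]) -> float:
    """Fraction of ground-truth fields the prediction gets right (partial credit)."""
    if not ground_truth:
        return 0.0
    correct = sum(predicted.get(key) == value for key, value in ground_truth.items())
    return correct / len(ground_truth)


s0 = {"name": "Emily Edwards", "flight": "1013", "action": "book"}      # ground truth (hypothetical)
s_hat = {"name": "Emily Edwards", "flight": "1014", "action": "book"}   # model prediction (hypothetical)

print(exact_match_score(s_hat, s0))  # 0.0 -- the flight number is wrong
print(scaled_score(s_hat, s0))       # 0.666... -- two of three fields match
```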
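For context on the splits referenced in the v1.3 notes above, a minimal loading sketch with the `datasets` library follows; the dataset id `air_dialogue`, the config name `air_dialogue_data`, and the `dialogue` column are assumptions about the Hub layout rather than facts taken from this commit.

```python
# Hedged sketch: loading the dataset with Hugging Face `datasets`.
# "air_dialogue", the "air_dialogue_data" config, and the "dialogue" column are
# assumed names -- check the dataset card / Hub repo for the exact identifiers.
from datasets import load_dataset

dialogues = load_dataset("air_dialogue", "air_dialogue_data")

print(dialogues)                 # available splits (train/validation/test as of v1.3)
sample = dialogues["train"][0]
print(sample["dialogue"][:3])    # first few customer/agent turns of one conversation
```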
@@ -289,22 +293,28 @@ cc-by-nc-4.0
 
 ### Citation Information
 
+```bibtex
 @inproceedings{wei-etal-2018-airdialogue,
     title = "{A}ir{D}ialogue: An Environment for Goal-Oriented Dialogue Research",
     author = "Wei, Wei and
       Le, Quoc and
       Dai, Andrew and
       Li, Jia",
+    editor = "Riloff, Ellen and
+      Chiang, David and
+      Hockenmaier, Julia and
+      Tsujii, Jun{'}ichi",
     booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
     month = oct # "-" # nov,
     year = "2018",
     address = "Brussels, Belgium",
     publisher = "Association for Computational Linguistics",
-    url = "https://
+    url = "https://aclanthology.org/D18-1419",
     doi = "10.18653/v1/D18-1419",
     pages = "3844--3854",
     abstract = "Recent progress in dialogue generation has inspired a number of studies on dialogue systems that are capable of accomplishing tasks through natural language interactions. A promising direction among these studies is the use of reinforcement learning techniques, such as self-play, for training dialogue agents. However, current datasets are limited in size, and the environment for training agents and evaluating progress is relatively unsophisticated. We present AirDialogue, a large dataset that contains 301,427 goal-oriented conversations. To collect this dataset, we create a context-generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail. Our experimental results indicate that state-of-the-art dialogue models can only achieve a score of 0.17 while humans can reach a score of 0.91, which suggests significant opportunities for future improvement.",
 }
+```
 
 ### Contributions
 