---
language:
- en
---
WikiQuest-NLP Dataset
WikiQuest-NLP is a dataset designed for training and evaluating NLP models. It is generated from the Google Natural Questions (NQ) dataset and contains two main components:
- Filtered Wikipedia Data: A cleaned and filtered version of Wikipedia content, extracted and processed for NLP training.
- Question-Answer Pairs: A CSV file with contextually relevant questions and their corresponding answers, derived from the filtered Wikipedia text.
Features
Filtered Wikipedia Data (TXT)
- Content: This file contains text from Wikipedia, preprocessed to remove irrelevant content and formatted with newline characters to preserve paragraph structure.
- Format: Plain text file with newline-separated paragraphs for easy use in language modeling.
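Because the text file uses newline-separated paragraphs, it can be split into training units with a few lines of code. A minimal sketch (the helper name `load_paragraphs` is illustrative, and the inline sample stands in for the real file contents):

```python
def load_paragraphs(text: str) -> list[str]:
    """Split newline-separated text into non-empty paragraphs."""
    return [p.strip() for p in text.split("\n") if p.strip()]

# Inline sample standing in for the downloaded TXT file's contents:
sample = "First paragraph about a topic.\n\nSecond paragraph with more detail.\n"
paragraphs = load_paragraphs(sample)
print(paragraphs)  # two paragraph strings, blank lines dropped
```

In practice, read the downloaded file with `open(path).read()` and pass the result to the same function.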
Question-Answer Pairs (CSV)
- Columns:
context: The filtered Wikipedia text providing context for the question.
question: The question pertaining to the context.
answer: The answer to the question, extracted from the context.
- Format: Comma-separated values (CSV) file, suitable for direct use in question-answering model training.
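The CSV can be read with the standard library. A small sketch using a hypothetical in-memory sample that mirrors the `context`/`question`/`answer` schema (in practice, open the downloaded CSV file instead):

```python
import csv
import io

# Hypothetical sample row matching the dataset's column layout.
sample_csv = (
    "context,question,answer\n"
    '"Paris is the capital of France.","What is the capital of France?","Paris"\n'
)

# csv.DictReader maps each row to a dict keyed by the header columns.
rows = list(csv.DictReader(io.StringIO(sample_csv)))
print(rows[0]["question"])  # -> What is the capital of France?
```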
Usage
- Language Modeling: Use the filtered Wikipedia data to train language models from scratch or as a base for further fine-tuning.
- Question Answering: Utilize the question-answer pairs for training models on question-answering tasks, enabling evaluation and improvement of QA systems.
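Since the card states that answers are extracted from the context, each row can be converted into a SQuAD-style extractive-QA example by locating the answer span in the context. A sketch, assuming the answer appears verbatim in the context (the helper name `to_squad_example` is illustrative):

```python
def to_squad_example(row: dict) -> dict:
    """Convert a context/question/answer row into a SQuAD-style example.

    Assumes the answer string occurs verbatim in the context, per the
    dataset description; if it does not, answer_start will be -1.
    """
    start = row["context"].find(row["answer"])
    return {
        "context": row["context"],
        "question": row["question"],
        "answers": {"text": [row["answer"]], "answer_start": [start]},
    }

example = to_squad_example({
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answer": "Paris",
})
print(example["answers"])  # answer_start is 0: "Paris" opens the context
```

Rows whose answer does not occur verbatim in the context (answer_start of -1) should be filtered out before training.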
How to Access
The dataset is hosted on Hugging Face, accessible for direct download and integration into your projects.
License
This dataset is created and provided for research and educational purposes. Ensure compliance with usage policies and licensing terms when incorporating the dataset into your work.
Contact
For any questions or issues regarding the dataset, please contact:
- Name: Piyush Bhatt
- GitHub: https://github.com/Piyush2102020
We hope WikiQuest-NLP serves as a valuable resource for your NLP research and model development. Happy modeling!