---
license: apache-2.0
language:
  - en
size_categories:
  - n<1K
task_categories:
  - question-answering
  - multiple-choice
configs:
  - config_name: benchmark
    data_files:
      - split: test
        path: dataset.json
tags:
  - geospatial
annotations_creators:
  - expert-generated
paperswithcode_id: mapeval-textual
---

# MapEval-Textual

MapEval-Textual is part of MapEval, a map-based benchmark for evaluating geospatial reasoning in foundation models. The dataset was created using MapQaTor.

## Usage

```python
from datasets import load_dataset

# Load the benchmark configuration
ds = load_dataset("MapEval/MapEval-Textual", name="benchmark")

# Build a multiple-choice prompt for each item
for item in ds["test"]:
    # Start with a clear task description
    prompt = (
        "You are a highly intelligent assistant. "
        "Based on the given context, answer the multiple-choice question by selecting the correct option.\n\n"
        "Context:\n" + item["context"] + "\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )

    # List the options, numbered from 1
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"

    # Ask the model to commit to a single option
    prompt += "\nSelect the best option by choosing its number."

    # Use the prompt as needed
    print(prompt)  # Replace with your processing logic
```
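
Once prompts are built, scoring a model comes down to comparing its chosen option number against the ground truth. The loop below is a minimal sketch, assuming each item carries an `answer` field holding the 1-based index of the correct option (verify the field name against the dataset files), and using a hypothetical `query_model` function as a stand-in for your actual LLM call.

```python
import re
from datasets import load_dataset

ds = load_dataset("MapEval/MapEval-Textual", name="benchmark")

def query_model(prompt: str) -> str:
    """Hypothetical placeholder: swap in your actual LLM call."""
    raise NotImplementedError

correct = 0
for item in ds["test"]:
    # Rebuild the prompt as in the snippet above
    options = "\n".join(
        f"{i}. {opt}" for i, opt in enumerate(item["options"], start=1)
    )
    prompt = (
        "Context:\n" + item["context"] + "\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n" + options + "\n\n"
        "Select the best option by choosing its number."
    )

    reply = query_model(prompt)

    # Take the first integer in the reply as the model's chosen option
    match = re.search(r"\d+", reply)
    # Assumption: item["answer"] is the 1-based index of the correct option
    if match and int(match.group()) == item["answer"]:
        correct += 1

print(f"Accuracy: {correct / len(ds['test']):.2%}")
```

Taking the first integer in the reply is a deliberately simple parse; constraining the model's output format (e.g., "answer with the number only") makes it more reliable.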

## Citation

If you use this dataset, please cite the original paper:

```bibtex
@article{dihan2024mapeval,
  title={MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models},
  author={Dihan, Mahir Labib and Hassan, Md Tanvir and Parvez, Md Tanvir and Hasan, Md Hasebul and Alam, Md Almash and Cheema, Muhammad Aamir and Ali, Mohammed Eunus and Parvez, Md Rizwan},
  journal={arXiv preprint arXiv:2501.00316},
  year={2024}
}
```