---
license: apache-2.0
language:
- en
size_categories:
- n<1K
task_categories:
- question-answering
- multiple-choice
configs:
- config_name: benchmark
data_files:
- split: test
path: dataset.json
tags:
- geospatial
annotations_creators:
- expert-generated
paperswithcode_id: mapeval-textual
---
# MapEval-Textual
MapEval-Textual is the textual variant of the [MapEval](https://arxiv.org/abs/2501.00316) benchmark for evaluating map-based geo-spatial reasoning in foundation models. The dataset was created using [MapQaTor](https://arxiv.org/abs/2412.21015).
## Usage
```python
from datasets import load_dataset

# Load the benchmark configuration (single "test" split)
ds = load_dataset("MapEval/MapEval-Textual", name="benchmark")

# Build a prompt for each example
for item in ds["test"]:
    # Start with a clear task description
    prompt = (
        "You are a highly intelligent assistant. "
        "Based on the given context, answer the multiple-choice question by selecting the correct option.\n\n"
        "Context:\n" + item["context"] + "\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )
    # List the options clearly, numbered from 1
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"
    # Ask the model to answer with the option number
    prompt += "\nSelect the best option by choosing its number."
    # Use the prompt as needed
    print(prompt)  # Replace with your processing logic
```
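To score a model on the benchmark, compare its chosen option against the gold label. The sketch below reuses the prompt format above and stubs out the model call with a hypothetical `my_model` function; swap in your own inference code. Note one assumption that you should verify against the actual dataset schema: the snippet treats `item["answer"]` as the 1-based index of the correct option.

```python
import re
from datasets import load_dataset

ds = load_dataset("MapEval/MapEval-Textual", name="benchmark")

def build_prompt(item):
    """Assemble the multiple-choice prompt as in the usage example above."""
    prompt = (
        "You are a highly intelligent assistant. "
        "Based on the given context, answer the multiple-choice question "
        "by selecting the correct option.\n\n"
        "Context:\n" + item["context"] + "\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"
    return prompt + "\nSelect the best option by choosing its number."

def my_model(prompt):
    # Hypothetical stub: always answers "1". Replace with a real model call.
    return "1"

def extract_choice(response):
    """Take the first integer in the response as the chosen option number."""
    match = re.search(r"\d+", response)
    return int(match.group()) if match else None

correct = total = 0
for item in ds["test"]:
    prediction = extract_choice(my_model(build_prompt(item)))
    # Assumption: `answer` holds the 1-based index of the correct option;
    # check the dataset schema before relying on this.
    correct += int(prediction == item["answer"])
    total += 1

print(f"Accuracy: {correct / total:.3f}")
```

The naive first-integer extraction works only if the model answers with a bare number as instructed; for chattier models you may want stricter parsing or a constrained-decoding setup.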
## Citation
If you use this dataset, please cite the original paper:
```
@article{dihan2024mapeval,
  title={MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models},
  author={Dihan, Mahir Labib and Hassan, Md Tanvir and Parvez, Md Tanvir and Hasan, Md Hasebul and Alam, Md Almash and Cheema, Muhammad Aamir and Ali, Mohammed Eunus and Parvez, Md Rizwan},
  journal={arXiv preprint arXiv:2501.00316},
  year={2024}
}
``` |