---
license: apache-2.0
task_categories:
  - question-answering
  - multiple-choice
language:
  - en
size_categories:
  - n<1K
configs:
  - config_name: benchmark
    data_files:
      - split: test
        path: dataset.json
paperswithcode_id: mapeval-api
tags:
  - geospatial
---

# MapEval-API

MapEval-API is part of the MapEval benchmark for evaluating geo-spatial reasoning in foundation models. The dataset was created using MapQaTor.

## Usage

```python
from datasets import load_dataset

# Load the benchmark split
ds = load_dataset("MapEval/MapEval-API", name="benchmark")

# Build a multiple-choice prompt for each example
for item in ds["test"]:
    # Start with a clear task description
    prompt = (
        "You are a highly intelligent assistant. "
        "Answer the multiple-choice question by selecting the correct option.\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )

    # List the options, numbered from 1
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"

    # Ask the model to answer with the option number
    prompt += "\nSelect the best option by choosing its number."

    # Use the prompt as needed
    print(prompt)  # Replace with your processing logic
```
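
The leaderboard below reports accuracy (%) overall and per question category. Here is a minimal scoring sketch under stated assumptions: it assumes each record carries the ground-truth option in an `answer` field and a category label in a `classification` field (both field names are assumptions; verify against the actual schema), and `predict` is a hypothetical placeholder for your model call.

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("MapEval/MapEval-API", name="benchmark")

def build_prompt(item) -> str:
    """Build the multiple-choice prompt (same construction as above)."""
    prompt = (
        "You are a highly intelligent assistant. "
        "Answer the multiple-choice question by selecting the correct option.\n\n"
        "Question:\n" + item["question"] + "\n\nOptions:\n"
    )
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"
    return prompt + "\nSelect the best option by choosing its number."

def predict(prompt: str) -> int:
    """Hypothetical stand-in: replace with a real model call returning an option number."""
    return 1  # placeholder

correct, total = defaultdict(int), defaultdict(int)
for item in ds["test"]:
    category = item["classification"]  # assumed field name for the question category
    total[category] += 1
    # 'answer' is assumed to hold the correct option; check whether it is 0- or 1-based
    if predict(build_prompt(item)) == item["answer"]:
        correct[category] += 1

for category, n in total.items():
    print(f"{category}: {100 * correct[category] / n:.2f}%")
print(f"Overall: {100 * sum(correct.values()) / sum(total.values()):.2f}%")
```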

## Leaderboard

| Model | Overall | Place Info | Nearby | Routing | Trip | Unanswerable |
|---|---|---|---|---|---|---|
| Claude-3.5-Sonnet | 64.00 | 68.75 | 55.42 | 65.15 | 71.64 | 55.00 |
| GPT-4-Turbo | 53.67 | 62.50 | 50.60 | 60.61 | 50.75 | 25.00 |
| GPT-4o | 48.67 | 59.38 | 40.96 | 50.00 | 56.72 | 15.00 |
| Gemini-1.5-Pro | 43.33 | 65.63 | 30.12 | 40.91 | 34.33 | 65.00 |
| Gemini-1.5-Flash | 41.67 | 51.56 | 38.55 | 46.97 | 34.33 | 30.00 |
| GPT-3.5-Turbo | 27.33 | 39.06 | 22.89 | 33.33 | 19.40 | 15.00 |
| GPT-4o-mini | 23.00 | 28.13 | 14.46 | 13.64 | 43.28 | 5.00 |
| Llama-3.2-90B | 39.67 | 54.69 | 37.35 | 39.39 | 35.82 | 15.00 |
| Llama-3.1-70B | 37.67 | 53.13 | 32.53 | 42.42 | 31.34 | 15.00 |
| Mixtral-8x7B | 27.67 | 32.81 | 18.07 | 27.27 | 38.81 | 15.00 |
| Gemma-2.0-9B | 27.00 | 35.94 | 14.46 | 28.79 | 26.87 | 45.00 |

### Comparison between ReAct and Chameleon with GPT-3.5-Turbo

| Agent | Overall | Place Info | Nearby | Routing | Trip | Unanswerable |
|---|---|---|---|---|---|---|
| ReAct | 27.33 | 39.06 | 22.89 | 33.33 | 19.40 | 15.00 |
| Chameleon | 49.33 | 54.69 | 54.21 | 51.51 | 43.28 | 25.00 |

## Citation

If you use this dataset, please cite the original paper:

```bibtex
@article{dihan2024mapeval,
  title={MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models},
  author={Dihan, Mahir Labib and Hassan, Md Tanvir and Parvez, Md Tanvir and Hasan, Md Hasebul and Alam, Md Almash and Cheema, Muhammad Aamir and Ali, Mohammed Eunus and Parvez, Md Rizwan},
  journal={arXiv preprint arXiv:2501.00316},
  year={2024}
}
```