mahirlabibdihan
committed on
Update README.md
README.md CHANGED
@@ -17,4 +17,38 @@ annotations_creators:
 - expert-generated
 paperswithcode_id: mapeval-textual
 ---
-
+
+
+# MapEval-Textual
+
+[MapEval](https://arxiv.org/abs/2501.00316)-Textual is created using [MapQaTor](https://arxiv.org/abs/2412.21015).
+
+## Usage
+
+```python
+from datasets import load_dataset
+
+# Load dataset
+ds = load_dataset("MapEval/MapEval-Textual", name="benchmark")
+
+# Generate better prompts
+for item in ds["test"]:
+    # Start with a clear task description
+    prompt = (
+        "You are a highly intelligent assistant. "
+        "Based on the given context, answer the multiple-choice question by selecting the correct option.\n\n"
+        "Context:\n" + item["context"] + "\n\n"
+        "Question:\n" + item["question"] + "\n\n"
+        "Options:\n"
+    )
+
+    # List the options more clearly
+    for i, option in enumerate(item["options"], start=1):
+        prompt += f"{i}. {option}\n"
+
+    # Add a concluding sentence to encourage selection of the answer
+    prompt += "\nSelect the best option by choosing its number."
+
+    # Use the prompt as needed
+    print(prompt)  # Replace with your processing logic
+```
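A natural follow-up to the prompt loop above is scoring model outputs. The sketch below is a minimal example, assuming the split also exposes a gold `answer` field holding the 1-based index of the correct option (not shown in the diff above), and using a hypothetical `get_model_choice` placeholder for whatever model call you plug in.

```python
from datasets import load_dataset

ds = load_dataset("MapEval/MapEval-Textual", name="benchmark")

def get_model_choice(prompt: str) -> int:
    # Hypothetical placeholder: replace with a call to your LLM and parse
    # the chosen option number out of its reply.
    return 1

correct = 0
for item in ds["test"]:
    # Build the same multiple-choice prompt as in the README example
    prompt = (
        "Based on the given context, answer the multiple-choice question "
        "by selecting the correct option.\n\n"
        "Context:\n" + item["context"] + "\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"

    # Assumption: item["answer"] holds the 1-based index of the correct option
    if get_model_choice(prompt) == item["answer"]:
        correct += 1

print(f"Accuracy: {correct / len(ds['test']):.2%}")
```

Check the dataset card for the actual label field name and whether its index is 0- or 1-based before relying on this.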