---
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
language:
- en
configs:
- config_name: train
  data_files:
  - split: 2ppl
    path:
    - train/people2_num200.jsonl
  - split: 3ppl
    path:
    - train/people3_num1000.jsonl
  - split: 4ppl
    path:
    - train/people4_num1000.jsonl
  - split: 5ppl
    path:
    - train/people5_num1000.jsonl
  - split: 6ppl
    path:
    - train/people6_num1000.jsonl
  - split: 7ppl
    path:
    - train/people7_num1000.jsonl
  - split: 8ppl
    path:
    - train/people8_num1000.jsonl
- config_name: test
  data_files:
  - split: 2ppl
    path:
    - test/people2_num100.jsonl
  - split: 3ppl
    path:
    - test/people3_num100.jsonl
  - split: 4ppl
    path:
    - test/people4_num100.jsonl
  - split: 5ppl
    path:
    - test/people5_num100.jsonl
  - split: 6ppl
    path:
    - test/people6_num100.jsonl
  - split: 7ppl
    path:
    - test/people7_num100.jsonl
  - split: 8ppl
    path:
    - test/people8_num100.jsonl
tags:
- logical
- reasoning
pretty_name: K
size_categories:
- 1K<n<10K
---
# knights-and-knaves Dataset [[Project Page]](https://memkklogic.github.io/)

The **knights-and-knaves dataset** is a logical reasoning benchmark for evaluating the reasoning capabilities of LLMs. Each puzzle describes a group of inhabitants who are either knights (who always tell the truth) or knaves (who always lie), and the task is to infer each person's identity from their statements.

**Check out the [perturbed knights-and-knaves dataset](https://huggingface.co/datasets/K-and-K/perturbed-knights-and-knaves) to evaluate LLM memorization in logical reasoning.**

## Loading the dataset

To load the dataset:
```python
from datasets import load_dataset

# Load the 2-person puzzles from the test subset
data_subset = load_dataset("K-and-K/knights-and-knaves", "test", split="2ppl")
```

* Available subsets: `test`, `train`.
* Available splits: `2ppl`, `3ppl`, `4ppl`, `5ppl`, `6ppl`, `7ppl`, `8ppl`.
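* For example, a minimal sketch (assuming only the `datasets` library and the subset/split names listed above) that loads every combination and reports its size:

```python
from datasets import load_dataset

# Subset and split names as listed in this card; adjust if the configuration changes.
subsets = ["train", "test"]
splits = ["2ppl", "3ppl", "4ppl", "5ppl", "6ppl", "7ppl", "8ppl"]

for subset in subsets:
    for split in splits:
        ds = load_dataset("K-and-K/knights-and-knaves", subset, split=split)
        print(f"{subset}/{split}: {len(ds)} examples, columns: {ds.column_names}")
```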
## Codebase

To evaluate LLMs on our datasets, visit our [GitHub repository](https://github.com/AlphaPav/mem-kk-logic/).
## Citing our Work

If you find our codebase and datasets useful, please cite our work:

```bibtex
@article{xie2024memorization,
  title={On Memorization of Large Language Models in Logical Reasoning},
  author={Chulin Xie and Yangsibo Huang and Chiyuan Zhang and Da Yu and Xinyun Chen and Bill Yuchen Lin and Bo Li and Badih Ghazi and Ravi Kumar},
  year={2024},
  eprint={2410.23123},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.23123},
}
```