---
license: cc-by-nc-sa-4.0
task_categories:
  - question-answering
language:
  - en
configs:
  - config_name: train
    data_files:
      - split: 2ppl
        path:
          - train/people2_num200.jsonl
      - split: 3ppl
        path:
          - train/people3_num1000.jsonl
      - split: 4ppl
        path:
          - train/people4_num1000.jsonl
      - split: 5ppl
        path:
          - train/people5_num1000.jsonl
      - split: 6ppl
        path:
          - train/people6_num1000.jsonl
      - split: 7ppl
        path:
          - train/people7_num1000.jsonl
      - split: 8ppl
        path:
          - train/people8_num1000.jsonl
  - config_name: test
    data_files:
      - split: 2ppl
        path:
          - test/people2_num100.jsonl
      - split: 3ppl
        path:
          - test/people3_num100.jsonl
      - split: 4ppl
        path:
          - test/people4_num100.jsonl
      - split: 5ppl
        path:
          - test/people5_num100.jsonl
      - split: 6ppl
        path:
          - test/people6_num100.jsonl
      - split: 7ppl
        path:
          - test/people7_num100.jsonl
      - split: 8ppl
        path:
          - test/people8_num100.jsonl
tags:
  - logical
  - reasoning
pretty_name: K
size_categories:
  - 1K<n<10K
---

# 📘 knights-and-knaves Dataset [Project Page]

The knights-and-knaves dataset is a logical-reasoning benchmark for evaluating the reasoning capabilities of LLMs. Each puzzle describes a group of people, each of whom is either a knight (who always tells the truth) or a knave (who always lies), and the task is to determine who is which from their statements.

🚀🚀 Check out the perturbed knights-and-knaves dataset to evaluate LLM memorization in reasoning.

## Loading the dataset

To load the dataset:

```python
from datasets import load_dataset

data_subject = load_dataset("K-and-K/knights-and-knaves", "test", split="2ppl")
```

- Available subsets: `train`, `test`.
- Available splits: `2ppl`, `3ppl`, `4ppl`, `5ppl`, `6ppl`, `7ppl`, `8ppl`.
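Each split is stored as a JSON Lines file (one puzzle record per line), so it can also be inspected without the `datasets` library. Below is a minimal sketch of parsing such a file; the `quiz` and `solution` field names here are illustrative assumptions and may not match the released files exactly:

```python
import json
from io import StringIO

# One illustrative record per line; the field names below are
# assumptions for demonstration, not guaranteed by the dataset.
sample_jsonl = (
    '{"quiz": "A says: B is a knave. B says: we are both knights.", '
    '"solution": [true, false]}\n'
)

def read_jsonl(stream):
    """Parse one JSON record per non-empty line of a JSON Lines stream."""
    return [json.loads(line) for line in stream if line.strip()]

records = read_jsonl(StringIO(sample_jsonl))
print(records[0]["solution"])  # → [True, False]
```

The same `read_jsonl` helper works on an open file handle for any of the `people*_num*.jsonl` split files.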

๐Ÿ› ๏ธ Codebase

To evaluate LLMs on our datasets, visit our GitHub repository.

โญ Citing our Work

If you find our codebase and datasets helpful, please cite our work:

```bibtex
@article{xie2024memorization,
  title={On Memorization of Large Language Models in Logical Reasoning},
  author={Chulin Xie and Yangsibo Huang and Chiyuan Zhang and Da Yu and Xinyun Chen and Bill Yuchen Lin and Bo Li and Badih Ghazi and Ravi Kumar},
  year={2024},
  eprint={2410.23123},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.23123},
}
```