This repository contains the `NYT-Connections` dataset proposed in the work *NYT-Connections*.

Authors: Angel Yahir Loredo Lopez, Tyler McDonald, Ali Emami
## Paper Abstract
Large Language Models (LLMs) have shown impressive performance on various benchmarks, yet their ability to engage in deliberate reasoning remains questionable. We present NYT-Connections, a collection of 358 simple word classification puzzles derived from the New York Times Connections game. This benchmark is designed to penalize quick, intuitive "System 1" thinking, isolating fundamental reasoning skills. We evaluated six recent LLMs, a simple machine learning heuristic, and humans across three configurations: single-attempt, multiple attempts without hints, and multiple attempts with contextual hints. Our findings reveal a significant performance gap: even top-performing LLMs like GPT-4 fall short of human performance by nearly 30%. Notably, advanced prompting techniques such as Chain-of-Thought and Self-Consistency show diminishing returns as task difficulty increases. NYT-Connections uniquely combines linguistic isolation, resistance to intuitive shortcuts, and regular updates to mitigate data leakage, offering a novel tool for assessing LLM reasoning capabilities.
## Puzzle Description
*NYT-Connections* puzzles are a subset of the New York Times' daily *Connections* contests. Each puzzle presents 16 words that must be sorted into 4 correct groups of varying difficulty. The base game offers a hint when a guess is one word away from being a correct group, and allows up to 4 mistakes; the goal is therefore to identify all 4 groups before committing 4 mistakes.
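The rules above can be sketched as a short simulation. This is an illustrative sketch, not code from the dataset: the function and variable names are ours, and we assume the "one away" hint means exactly 3 of the 4 guessed words belong to one remaining group.

```python
def play_connections(groups, guesses, max_mistakes=4):
    """Simulate the game rules described above.

    `groups`: the answer key, a list of 4-word groups (4 in the real game).
    `guesses`: an iterable of 4-word guesses from a solver.
    Returns (solved_groups, mistakes, hints) when the game ends.
    """
    remaining = [set(g) for g in groups]
    solved, mistakes, hints = [], 0, 0
    for guess in guesses:
        guess = set(guess)
        if guess in remaining:
            remaining.remove(guess)
            solved.append(guess)
            if not remaining:  # all groups found: puzzle solved
                break
        else:
            # assumed "one away" rule: 3 of 4 words match a remaining group
            if any(len(guess & g) == 3 for g in remaining):
                hints += 1
            mistakes += 1
            if mistakes == max_mistakes:  # 4th mistake ends the game
                break
    return solved, mistakes, hints
```

For example, a guess that shares 3 words with a correct group counts as both a mistake and a hint, while an exact match removes that group from play.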
## Data Description

`date` - the original date the contest was offered.

`contest` - the title string for the contest.

`words` - the collection of 16 words available for use in puzzle solving.

`answers` - an array of objects, where each object is a correct group and contains:

- `answerDescription` - the group name
- `words` - the 4 words that classify into this group

`difficulty` - the difficulty of the puzzle as rated by community contributors.
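Under this schema, a record can be sanity-checked with a short Python helper. This is an illustrative sketch, not part of the dataset; the sample values below are invented placeholders, and only the field names come from the descriptions above.

```python
def check_record(rec):
    """Basic structural checks for one record, following the field
    descriptions above."""
    assert len(rec["words"]) == 16                  # 16 words per puzzle
    assert len(rec["answers"]) == 4                 # 4 correct groups
    for group in rec["answers"]:
        assert len(group["words"]) == 4             # 4 words per group
        # every group word must come from the puzzle's word list
        assert set(group["words"]) <= set(rec["words"])

# A made-up record with the same shape (all values are placeholders).
words = [f"word{i}" for i in range(16)]
record = {
    "date": "2023-06-12",
    "contest": "Connections #1",
    "words": words,
    "answers": [
        {"answerDescription": f"Group {j + 1}", "words": words[4 * j:4 * j + 4]}
        for j in range(4)
    ],
    "difficulty": 3.0,
}
check_record(record)  # raises AssertionError on a malformed record
```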
## Citation
*To be added.*