## References

* Andor et al. (2019) D. Andor, L. He, K. Lee, and E. Pitler. 2019. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 5947-5952, Hong Kong, China.
* Asai and Hajishirzi (2020) A. Asai and H. Hajishirzi. 2020. Logic-guided data augmentation and regularization for consistent question answering. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 5642-5650, Online.
* Bauer et al. (2018) L. Bauer, Y. Wang, and M. Bansal. 2018. Commonsense for generative multi-hop question answering tasks. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 4220-4230, Brussels, Belgium.
* Bhagavatula et al. (2019) C. Bhagavatula, R. L. Bras, C. Malaviya, K. Sakaguchi, A. Holtzman, H. Rashkin, D. Downey, S. Wen-tau Yih, and Y. Choi. 2019. Abductive commonsense reasoning.
* Brown et al. (2020) T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. 2020. Language models are few-shot learners. _Advances in Neural Information Processing Systems_, 33:1877-1901.
* Campagna et al. (2020) G. Campagna, A. Foryciarz, M. Moradshahi, and M. Lam. 2020. Zero-shot transfer learning with synthesized data for multi-domain dialogue state tracking. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 122-132, Online.
* Chen et al. (2020) W. Chen, H. Zha, Z. Chen, W. Xiong, H. Wang, and W. Yang Wang. 2020. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In _Findings of the Association for Computational Linguistics: EMNLP 2020_, pages 1026-1036, Online.
* Chowdhery et al. (2022) A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_.
* Cobbe et al. (2021) K. Cobbe, V. Kosaraju, M. Bavarian, J. Hilton, R. Nakano, C. Hesse, and J. Schulman. 2021. Training verifiers to solve math word problems. _arXiv preprint arXiv:2110.14168_.
* Creswell et al. (2022) Antonia Creswell, Murray Shanahan, and Irina Higgins. 2022. Selection-inference: Exploiting large language models for interpretable logical reasoning.
* Deng et al. (2021) Xiang Deng, Yu Su, Alyssa Lees, You Wu, Cong Yu, and Huan Sun. 2021. ReasonBERT: Pretrained to reason with distant supervision. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 6112-6127, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Ding et al. (2019) Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 2694-2703, Florence, Italy. Association for Computational Linguistics.
* Dua et al. (2019) Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 2368-2378, Minneapolis, Minnesota. Association for Computational Linguistics.
* Feng et al. (2020) Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multi-hop relational reasoning for knowledge-aware question answering.
* Geva et al. (2020) Mor Geva, Ankit Gupta, and Jonathan Berant. 2020. Injecting numerical reasoning skills into language models. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 946-958, Online. Association for Computational Linguistics.
* Geva et al. (2021) Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle use a laptop? A question answering benchmark with implicit reasoning strategies. _Transactions of the Association for Computational Linguistics_, 9:346-361.
* He et al. (2022) Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In _International Conference on Learning Representations_.
* He et al. (2021) Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In _International Conference on Learning Representations_.
* Holtzman et al. (2021) Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning_, pages 2790-2799. PMLR.
* Hu et al. (2021) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. _arXiv preprint arXiv:2106.09685_.
* Hu et al. (2019) Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 1596-1606, Hong Kong, China. Association for Computational Linguistics.
* Jin et al. (2022) Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. 2022. A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models. In _Proceedings of the 60th Annual Meetingof the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 2763-2775, Dublin, Ireland. Association for Computational Linguistics.
* Kojima et al. (2022) Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners.
* Koncel-Kedziorski et al. (2015) Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. _Transactions of the Association for Computational Linguistics_, 3:585-597.
* Kundu et al. (2019) Souvik Kundu, Tushar Khot, Ashish Sabharwal, and Peter Clark. 2019. Exploiting explicit paths for multi-hop reading comprehension. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 2737-2747, Florence, Italy. Association for Computational Linguistics.
* Lampinen et al. (2022) Andrew K Lampinen, Ishita Dasgupta, Stephanie CY Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L McClelland, Jane X Wang, and Felix Hill. 2022. Can language models learn from explanations in context? _arXiv preprint arXiv:2204.02329_.
* Le Scao and Rush (2021) Teven Le Scao and Alexander M Rush. 2021. How many data points is a prompt worth? In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 2627-2636.
* Lin et al. (2019) Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 2829-2839, Hong Kong, China. Association for Computational Linguistics.
* Liu et al. (2022) Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In _Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures_, pages 100-114, Dublin, Ireland and Online. Association for Computational Linguistics.
* Liu et al. (2020) Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. LogiQA: A challenge dataset for machine reading comprehension with logical reasoning.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Lu et al. (2021) Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2021. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity.
* Miao et al. (2020) Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing english math word problem solvers. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 975-984.
* Mihaylov and Frank (2018) Todor Mihaylov and Anette Frank. 2018. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 821-832, Melbourne, Australia. Association for Computational Linguistics.
* Min et al. (2021) Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. Metaicl: Learning to learn in context.
* Min et al. (2022) Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work?
* Patel et al. (2021) Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 2080-2094.
* Pi et al. (2022) Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Yan Gao, Qiang Fu, Jian-Guang Lou, and Weizhu Chen. 2022. Reasoning like program executors. _arXiv preprint arXiv:2201.11473_.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. _OpenAI blog_, 1(8):9.
* Roy and Roth (2015) Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_, pages 1743-1752.
* Rubin et al. (2021) Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2021. Learning to retrieve prompts for in-context learning.
* Shen et al. (2021) Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. 2021. Generate & rank: A multi-task framework for math word problems. In _Findings of the Association for Computational Linguistics: EMNLP 2021_, pages 2269-2279.
* Sinha et al. (2019) Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L Hamilton. 2019. Clutrr: A diagnostic benchmark for inductive reasoning from text. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 4506-4515.
* Talmor et al. (2019) Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 4149-4158.
* Wang et al. (2022a) Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2022a. Logic-driven context extension and data augmentation for logical reasoning of text. In _Findings of the Association for Computational Linguistics: ACL 2022_, pages 1619-1629, Dublin, Ireland. Association for Computational Linguistics.
* Wang et al. (2019) Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, and Michael Witbrock. 2019. Improving natural language inference using external knowledge in the science questions domain. In _Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence_, AAAI'19/IAAI'19/EAAI'19. AAAI Press.
* Wang et al. (2022b) Xiting Wang, Kunpeng Liu, Dongjie Wang, Le Wu, Yanjie Fu, and Xing Xie. 2022b. Multi-level recommendation reasoning over knowledge graphs with reinforcement learning. In _The Web Conference 2022_.
* Wang et al. (2022c) Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022c. Self-consistency improves chain of thought reasoning in language models.
* Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. _arXiv preprint arXiv:2201.11903_.
* Xu et al. (2021) Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, and Xuedong Huang. 2021. Human parity on commonsenseqa: Augmenting self-attention with external attention. _arXiv preprint arXiv:2112.03254_.
* Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.
* Yoran et al. (2022) Ori Yoran, Alon Talmor, and Jonathan Berant. 2022. Turning tables: Generating examples from semi-structured tables for endowing language models with reasoning skills. In _Proceedings of the 60thAnnual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 6016-6031, Dublin, Ireland. Association for Computational Linguistics.
* Yu et al. (2020) Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. 2020. Reclor: A reading comprehension dataset requiring logical reasoning.
* Zelikman et al. (2022) Eric Zelikman, Yuhuai Wu, and Noah D Goodman. 2022. Star: Bootstrapping reasoning with reasoning. _arXiv preprint arXiv:2203.14465_.
* Zellers et al. (2018) Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 93-104, Brussels, Belgium. Association for Computational Linguistics.
* Zhao et al. (2021) Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models.
* Zhou et al. (2022) Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reasoning in large language models.
* Zhu et al. (2021) Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. 2021. Tat-qa: A question answering benchmark on a hybrid of tabular and textual content in finance.
This is the Appendix for the paper: "Making Large Language Models Better Reasoners with Step-Aware Verifier".
## Appendix A Preliminaries
Prompting. Prompting means prepending a few exemplars to the task input \(\mathbf{x}\) and generating the output \(\mathbf{y}\) from the pretrained language model:
\[p(\mathbf{y}|C,\mathbf{x})=\prod_{t=1}^{|\mathbf{y}|}p_{\text{LM}}(y_{t}|C, \mathbf{x},y_{<t}), \tag{3}\]
where \(C\) is the concatenation of \(K\) exemplars:
\[C=(\overline{\mathbf{x}}_{1},\overline{\mathbf{y}}_{1});(\overline{\mathbf{x }}_{2},\overline{\mathbf{y}}_{2});...;(\overline{\mathbf{x}}_{K},\overline{ \mathbf{y}}_{K}). \tag{4}\]
We denote **prompt** as the concatenation of the exemplars \(C\) and the input \(\mathbf{x}\).
Reasoning Paths.For reasoning tasks that aim to generate an answer \(\mathbf{y}\) for a question \(\mathbf{x}\), Wei et al. (2022) proposed the insertion of a reasoning path \(\mathbf{z}\) before generating the answer \(\mathbf{y}\):
\[C^{\prime}=(\overline{\mathbf{x}}_{1},\overline{\mathbf{z}}_{1},\overline{ \mathbf{y}}_{1});...;(\overline{\mathbf{x}}_{K},\overline{\mathbf{z}}_{K}, \overline{\mathbf{y}}_{K}), \tag{5}\]
where \(\mathbf{z}_{i}\) is a text **reasoning path** of how the answer \(\mathbf{y}_{i}\) is reasoned step-by-step for question \(\mathbf{x}_{i}\).
Then, during inference, a reasoning path \(\mathbf{z}\) will be generated before the answer \(\mathbf{y}\):
\[p(\mathbf{y}|C^{\prime},\mathbf{x})=p(\mathbf{z}|C^{\prime},\mathbf{x})\cdot p (\mathbf{y}|C^{\prime},\mathbf{x},\mathbf{z}). \tag{6}\]
Figure 10 demonstrates this idea in arithmetic reasoning (GSM8K), and Table 7 demonstrates this idea in commonsense reasoning (StrategyQA) and inductive reasoning (CLUTRR).
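To make these definitions concrete, here is a minimal Python sketch of how the exemplar concatenations \(C\) and \(C^{\prime}\) become prompt strings; the Q/A template and separators are illustrative assumptions, not the exact format used in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Exemplar:
    question: str                    # x_i
    answer: str                      # y_i
    reasoning: Optional[str] = None  # z_i; None for plain prompting (C), set for chain-of-thought (C')

def build_prompt(exemplars: List[Exemplar], question: str) -> str:
    """Concatenate exemplars with the test question x; the language model
    then generates the continuation (a reasoning path and/or answer)."""
    parts = []
    for ex in exemplars:
        answer_text = f"The answer is {ex.answer}."
        if ex.reasoning is not None:
            answer_text = f"{ex.reasoning} {answer_text}"  # insert z_i before y_i
        parts.append(f"Q: {ex.question}\nA: {answer_text}")
    parts.append(f"Q: {question}\nA:")                     # the model completes this
    return "\n\n".join(parts)

# Usage: a one-shot chain-of-thought prompt for an arithmetic question.
demo = Exemplar(
    question="James runs 3 sprints 3 times a week, 60 meters each. How many meters per week?",
    reasoning="He runs 3 * 3 = 9 sprints a week, so 9 * 60 = 540 meters.",
    answer="540",
)
print(build_prompt([demo], "Seven red apples and two green apples are in the basket. How many apples?"))
```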
## Appendix B Boosting Reasoning Paths via Self-Teaching
In this section, we first introduce self-teaching, the method we use to construct a larger exemplar base when the original dataset does not contain enough data with well-annotated reasoning paths (Appendix B.1). We then discuss the noise issue when facing multiple-choice tasks (Appendix B.2).
### Self-Teaching
A critical issue of DiVeRSe is **how to provide diverse prompts**.6 Suppose there is an exemplar base \(E\): we can sample \(K\) exemplars from it to construct a prompt, and repeat this \(M_{1}\) times independently to construct \(M_{1}\) prompts with diverse exemplars.
Footnote 6: Wang et al. (2022c) tried an ensemble-based approach, i.e., to permute exemplars in the original prompt. However, this strategy does not increase diversity in terms of exemplars.
\begin{table}
\begin{tabular}{l} \hline
**[StrategyQA]** Yes or no: Could a llama birth twice during War in Vietnam (1945-46)? \(\triangleright\)_The War in Vietnam was 6 months. The gestation period for a llama is 11 months. So a llama could not give birth twice during the War in Vietnam. The answer is **no**._ \\ \hline
**[CLUTRR]** Roy was eating lunch with his son John and his wife Mary. What kind of relative is John to Mary? \(\triangleright\)_John is the son of Roy. Roy is the husband of Mary. Thus, John is the son of Mary. The answer is **son**._ \\ \hline \hline \end{tabular}
\end{table}
Table 7: Besides arithmetic reasoning, we also investigate commonsense and inductive reasoning.

Figure 10: Prompting large language models to generate different reasoning paths, then selecting the final answer via majority voting (Wang et al., 2022c).

For scenarios that do not have sufficient exemplars (i.e., \(|E|<K*M_{1}\)), we propose to **bootstrap the diversity of prompts by "self-teaching"**, i.e., generating pseudo reasoning paths from a few exemplars and some \(\langle\text{question},\text{answer}\rangle\) pairs without reasoning paths.7 Suppose that \(D\) is a dataset without reasoning paths, consisting of \((\mathbf{x},\mathbf{y}^{*})\) pairs. Given the small exemplar base \(E\), for each \((\mathbf{x},\mathbf{y}^{*})\in D\), we can use prompting to generate a reasoning path \(\mathbf{z}\) and the predicted answer \(\mathbf{y}\). We define the pseudo exemplar base \(E^{\prime}\) as:
\[E^{\prime}=\{(\mathbf{x},\mathbf{z},\mathbf{y})|(\mathbf{x},\mathbf{y}^{*})\in D,\mathbf{y}=\mathbf{y}^{*}\}, \tag{7}\]
then \(E\cup E^{\prime}\) can be regarded as the new exemplar base for generating diverse prompts.
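As an illustration, the construction of \(E^{\prime}\) in Equation (7) can be sketched as follows; `generate_path` is a hypothetical helper that prompts the language model with the seed exemplars and parses a (reasoning path, predicted answer) pair out of the completion.

```python
def self_teach(seed_exemplars, dataset, generate_path):
    """Build the pseudo exemplar base E' from (question, gold_answer) pairs (Eq. 7).

    A generated path is kept only if its predicted answer matches the gold
    answer. Note this filter can still admit invalid reasoning paths that
    happen to guess the right answer (the noise discussed in Appendix B.2).
    """
    pseudo = []
    for question, gold_answer in dataset:
        reasoning_path, predicted_answer = generate_path(seed_exemplars, question)
        if predicted_answer == gold_answer:
            pseudo.append((question, reasoning_path, predicted_answer))  # (x, z, y)
    return seed_exemplars + pseudo  # E ∪ E'
```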
### Noises in Multiple Choice Tasks
In our experimental setup, StrategyQA and CommonsenseQA are more challenging than other tasks, as they use pseudo exemplars generated through "self-teaching" (Appendix B.1).
"Self-teaching" may lead to bad exemplars, whose reasoning paths are invalid but happen to yield answers coinciding with the ground truth. Questions in StrategyQA/CommonsenseQA are two-choice/four-choice questions, respectively. Therefore, such noise would be more serious in StrategyQA than in CommonsenseQA. This somehow explains why DiVeRSe can achieve comparable performance (\(-0.8\%\)) as the PaLM-based SOTA on CommonsenseQA, while it sees a \(3.0\%\) performance decline to PaLM on StrategyQA, which has only two choices. In other words, it is easier for StrategyQA to yield a right answer but a misleading reasoning path.
## Appendix C Data Statistics
Table 8 shows the reasoning benchmarks we use in this paper with examples. We use the same test sets as Wei et al. (2022) for GSM8K, AsDiv, MultiArith, SVAMP, SingleEq, and CommonsenseQA.
For StrategyQA, there are \(2,290\) test cases (i.e., questions paired with TRUE/FALSE labels), but there is no other case that can be leveraged by DiVeRSe to construct diverse exemplars (as introduced in Section 2.1). To address this problem, we randomly divide these \(2,290\) test cases into two equal parts (denoted as \(T_{1}\) and \(T_{2}\)). For each Di
\begin{table}
\begin{tabular}{l c l} \hline \hline Dataset & \(N\) & Example Question \\ \hline GSM8K & 1319 & James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? \\ \hline AsDiv & 2096 & Seven red apples and two green apples are in the basket. How many apples are in the basket? \\ \hline MultiArith & 600 & The school cafeteria ordered 42 red apples and 7 green apples for students lunches. But, if only 9 students wanted fruit, how many extra did the cafeteria end up with? \\ \hline SVAMP & 1000 & Paco had 26 salty cookies and 17 sweet cookies. He ate 14 sweet cookies and 9 salty cookies. How many salty cookies did Paco have left? \\ \hline SingleEq & 508 & Terez has 44 cows on his farm. 50 percent of the cows are female, and 50 percent of the females are pregnant. How many pregnant female cows does Terez have? \\ \hline CommonsenseQA & 3387 & Sammy wanted to go to where the people were. Where might he go? Options: (a) race track (b) populated areas (c) desert (d) apartment (e) roadblock \\ \hline StrategyQA & 2290 & Could you go to New York Public Library and the Six Flags Great Escape in the same day? \\ \hline CLUTRR & 447 & Kelly and her mother Ernest made breakfast together. Constance and her husband Ernest wanted a child badly. What kind of relative is Kelly to Constance? The possible relationships are: sister, son, aunt, granddaughter, father, grandfather, grandmother, mother-in-law, uncle, niece, mother, brother, daughter, nephew, grandson, son-in-law, father-in-law, daughter-in-law. \\ \hline \hline \end{tabular}
\end{table}
Table 8: Reasoning benchmarks we use in this paper with examples. \(N\) means the number of test cases.
[MISSING_PAGE_FAIL:16]
[MISSING_PAGE_EMPTY:17]

# Making Large Language Models Better Reasoners with Step-Aware Verifier
Yifei Li\({}^{1,2}\); Zeqi Lin\({}^{2}\), Shizhuo Zhang\({}^{2}\), Qiang Fu\({}^{2}\), Bei Chen\({}^{2}\),
Jian-Guang Lou\({}^{2}\), Weizhu Chen\({}^{2}\)
\({}^{1}\) National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
\({}^{2}\) Microsoft Corporation
{yifeili, zeqi.lin, v-shizzhang, qifu, bei.chen, jlou, wzchen}@microsoft.com
liyifei@stu.pku.edu.cn
Work was done during an internship at Microsoft Research Asia.
###### Abstract
Few-shot learning is a challenging task that requires language models to generalize from limited examples. Large language models like GPT-3 and PaLM have made impressive progress in this area, but they still face difficulties in reasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve their reasoning skills, previous work has proposed to guide the language model with prompts that elicit a series of reasoning steps before giving the final answer, achieving a significant improvement on GSM8K from \(17.9\%\) to \(58.1\%\) in problem-solving rate. In this paper, we present DiVeRSe (Diverse Verifier on Reasoning Step), a novel approach that further enhances the reasoning capability of language models. DiVeRSe has three main components: first, it generates diverse prompts to explore different reasoning paths for the same question; second, it uses a verifier to filter out incorrect answers based on a weighted voting scheme; and third, it verifies each reasoning step individually instead of the whole chain. We evaluate DiVeRSe on the latest language model _code-davinci-002_ and show that it achieves new state-of-the-art results on six of eight reasoning benchmarks (e.g., GSM8K \(74.4\%\to 83.2\%\)).
## 1 Introduction
Large pretrained language models (PLMs) have shown remarkable performance on various natural language processing tasks, either by few-shot learning with prompts (Radford et al., 2019; Le Scao and Rush, 2021; Jin et al., 2022) or by fine-tuning (Houlsby et al., 2019; Hu et al., 2021; He et al., 2022). However, despite the increasing size and capacity of PLMs such as GPT-3 with 175B parameters (Brown et al., 2020) and PaLM with 540B parameters (Chowdhery et al., 2022), their reasoning abilities are still limited, especially on tasks that involve arithmetic, commonsense, or inductive reasoning and require multiple steps to produce correct answers (Cobbe et al., 2021).
Recent works (Wei et al., 2022; Zhou et al., 2022; Kojima et al., 2022; Lampinen et al., 2022) have demonstrated that PLMs possess some latent reasoning capabilities, but they need carefully designed prompts to activate them. For instance, Wei et al. (2022) proposed chain-of-thought reasoning, which inserts multi-step reasoning paths before generating the final answers, and achieved significant improvement on the GSM8K arithmetic benchmark (Cobbe et al., 2021). Wang et al. (2022c) further introduced a voting mechanism to select the most consistent answer among different reasoning paths, and achieved state-of-the-art results on several reasoning benchmarks using the PaLM model (Chowdhery et al., 2022). Building on these successes, this paper continues this line of research and advances the reasoning capabilities of PLMs in three aspects, as illustrated in Figure 1.
First, we propose to increase the diversity of reasoning paths by not only sampling from a single prompt, but also varying the prompt itself. We hypothesize that different prompts can elicit different ways of thinking, while the correct answer should be robust to these variations. Second, we propose to use a verifier to score the quality of each reasoning path and guide the voting mechanism. We argue that not all reasoning paths are equally good
Figure 1: Our proposed method, DiVeRSe (**Diverse Verifier on Reasoning Step**).
or reliable, and some may contain errors or inconsistencies that can be detected by the verifier. Third, we propose to assign a fine-grained label to each step of the reasoning path and use a step-aware verifier to attribute the correctness or wrongness of the final answer to each step. We conjecture that some steps may be correct but followed by wrong steps or vice versa, and identifying these cases can help diagnose and improve the reasoning process.
We name our method DiVeRSe (Diverse Verifier on Reasoning Step) and evaluate it on eight reasoning benchmarks that require different types of reasoning skills. We use three OpenAI PLMs (_davinci_, _text-davinci-002_, and _code-davinci-002_) and compare our results with recent state-of-the-art methods. We find that DiVeRSe can consistently and significantly improve the performance of PLMs on these tasks, and achieve new state-of-the-art results on six of them1: GSM8K (\(74.4\%\to 83.2\%\)), AsDiv (\(81.9\%\to 88.7\%\)), MultiArith (\(99.3\%\to 99.8\%\)), SVAMP (\(86.6\%\to 87.0\%\)), SingleEq (\(79.5\%\to 94.9\%\)), and CLUTRR (\(67.0\%\to 95.9\%\)).
Footnote 1: Most of the previous SOTA results were achieved by self-consistency on PaLM-540B (Chowdhery et al., 2022).
Our data is publicly available at https://github.com/microsoft/DiVeRSe.
## 2 Diverse Verifier on Reasoning Step
Figure 1 shows the overview of DiVeRSe. The key insights are three-fold: (1) leveraging _diverse prompts_ to induce more diverse reasoning paths from the language models (Section 2.1); (2) training a _voting verifier_ to better derive the final answers from multiple reasoning paths (Section 2.2); (3) leveraging _step correctness_ to further boost the voting verifier (Section 2.3).
### Diverse Prompts
To reason effectively, it is beneficial to explore diverse reasoning paths, following the idea that "_All Roads lead to Rome_". Wang et al. (2022c) proposed to generate various reasoning paths from language models by _sampling decoding_. However, their method relies on a single fixed set of exemplars for all prompts, which may introduce bias and limit the diversity of the generated reasoning paths. To address this issue, we randomly select \(M_{1}\) different prompts for each question, and then sample \(M_{2}\) reasoning paths for each prompt using sampling decoding. This way, we obtain \(M=M_{1}\times M_{2}\) diverse reasoning paths for each question.2
Footnote 2: Our main experiments use \(M_{1}=5\) and \(M_{2}=20\).
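As a sketch, this two-level sampling scheme looks like the following; `sample_path` is a hypothetical helper that queries the language model once with temperature sampling and returns one (reasoning path, answer) pair.

```python
import random

def diverse_reasoning_paths(exemplar_base, question, sample_path, K=5, M1=5, M2=20):
    """Generate M = M1 * M2 reasoning paths: M1 random prompts x M2 samples each."""
    paths = []
    for _ in range(M1):
        prompt_exemplars = random.sample(exemplar_base, K)  # one distinct prompt
        for _ in range(M2):
            paths.append(sample_path(prompt_exemplars, question))
    return paths  # M1 * M2 (reasoning_path, answer) candidates
```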
### Voting Verifier
Verifier. The verifier takes a question and a candidate reasoning path as input, and outputs the probability that the reasoning path leads to the correct answer. We use _deberta-v3-large_ (He et al., 2021) as the backbone model, with a small scalar head that outputs predictions on the \([\mathbf{CLS}]\) token.

Training the verifier. For each training question, we generate multiple candidate reasoning paths using chain-of-thought reasoning. We regard the reasoning paths that match the ground-truth final answer as positive, and the others as negative.

Voting Verifier. Wang et al. (2022c) use _majority voting_ to aggregate the predictions of different reasoning paths. This method may fail when the majority of the reasoning paths are misled while a minority of them are reasonable. We propose the _voting verifier_, which leverages both _voting_ and the _verifier_:
\[\hat{\mathbf{y}}=\operatorname*{arg\,max}_{\mathbf{y}}\sum_{i=1}^{M}\mathbbm{1 }_{\mathbf{y}_{i}=\mathbf{y}}\cdot f(\mathbf{x}_{i},\mathbf{z}_{i},\mathbf{ y}_{i}), \tag{1}\]
where \(\mathbbm{1}_{\mathbf{y}_{i}=\mathbf{y}}\) is an indicator function that returns 1 (or 0) if \(\mathbf{y}_{i}=\mathbf{y}\) (or not), and \(f(\cdot)\) is the probability produced by the verifier.
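Equation (1) amounts to summing verifier scores per distinct final answer and taking the argmax; below is a minimal sketch, with `verifier_score` standing in for \(f(\cdot)\).

```python
from collections import defaultdict

def voting_verifier(question, paths, verifier_score):
    """Aggregate (reasoning_path, answer) candidates per Equation (1).

    Majority voting is the special case where verifier_score(...) is always 1;
    a verifier without voting would instead return the answer of the single
    highest-scoring path.
    """
    score_per_answer = defaultdict(float)
    for reasoning_path, answer in paths:
        score_per_answer[answer] += verifier_score(question, reasoning_path, answer)
    return max(score_per_answer, key=score_per_answer.get)  # y-hat
```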
### Step-aware Voting Verifier
Figure 2: Chain-of-thought reasoning for a GSM8K math word problem. The prompt is colored black and the reasoning path produced by the language model is colored teal. This reasoning path contains two reasoning steps.

Each reasoning path consists of several steps. We hypothesize that not all the steps in an incorrect reasoning path are equally wrong, and some steps may still be useful for reasoning. To exploit this, we extend the voting verifier to a step-aware voting verifier by introducing an extended loss function:
\[\begin{split}\mathcal{L}=\mathcal{L}_{0}+\alpha\cdot\mathcal{L}_{1}, \\ \mathcal{L}_{1}=\sum_{i=1}^{|\hat{D}|}\sum_{j=1}^{|S_{i}|}\!\! \text{BCE}(\text{label}_{i,j},f^{\prime}(\text{input}_{i},j)).\end{split} \tag{2}\]
\(\alpha\) is a hyperparameter to balance the original loss \(\mathcal{L}_{0}\) and the step-level auxiliary loss \(\mathcal{L}_{1}\); \(S_{i,1},S_{i,2},...,S_{i,|S_{i}|}\) are the steps in \(\mathbf{z}_{i}\); \(\text{label}_{i,j}\) indicates whether \(S_{i,j}\) is correct or not; \(f^{\prime}(\text{input}_{i},j)\) represents the probability of the positive label for \(S_{i,j}\).3
Footnote 3: Specifically, \(f^{\prime}(\text{input}_{i},j)\) is predicted from the hidden state of the last token of \(S_{i,j}\) in deberta-v3-large, similar to token classification tasks.
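In PyTorch-style code, the combined objective might look like the sketch below; it assumes the verifier exposes one path-level logit (from the \([\mathbf{CLS}]\) head) and one logit per reasoning step (from each step's last token, per footnote 3).

```python
import torch
import torch.nn.functional as F

def step_aware_loss(path_logit, path_label, step_logits, step_labels, alpha=0.2):
    """L = L0 + alpha * L1 (Equation 2), both terms binary cross-entropy.

    path_logit:  scalar logit for the whole reasoning path ([CLS] head).
    path_label:  1.0 if the path reaches the correct final answer, else 0.0.
    step_logits: shape (num_steps,), one logit per reasoning step S_{i,j}.
    step_labels: shape (num_steps,), per-step correctness labels label_{i,j}.
    """
    l0 = F.binary_cross_entropy_with_logits(path_logit, path_label)
    l1 = F.binary_cross_entropy_with_logits(step_logits, step_labels)
    return l0 + alpha * l1

# Toy usage with fabricated logits and labels:
loss = step_aware_loss(
    torch.tensor(0.3), torch.tensor(1.0),
    torch.tensor([1.2, -0.4, 0.8]), torch.tensor([1.0, 0.0, 1.0]),
)
```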
**To obtain the step-level labels** (i.e., \(\text{label}_{i,j}\)) for negative training data with wrong answers, we design an algorithm that compares intermediate results among steps in positive/negative reasoning paths. Figure 3 illustrates this algorithm. It not only works on math word problems but also generalizes to other reasoning tasks: we use an off-the-shelf natural language inference model, _roberta-large-mnli_ (Liu et al., 2019), to check whether two reasoning steps are semantically equivalent. Given a reasoning step, if we cannot find any semantically equivalent step in the positive reasoning paths, we label it and all the subsequent steps as negative steps.
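For math word problems, where every step yields an intermediate result, the rule can be sketched as follows (exact-match membership against results seen in positive paths is an illustrative simplification; for non-numeric tasks it is replaced by the NLI-based equivalence check described above).

```python
def label_steps(candidate_results, positive_result_chains):
    """Step labels for one (negative) reasoning path, cf. Figure 3.

    Steps stay positive until one produces an intermediate result that never
    occurs in any positive path; that step and every later step are negative.
    """
    seen_in_positive = {r for chain in positive_result_chains for r in chain}
    labels, still_ok = [], True
    for result in candidate_results:
        still_ok = still_ok and (result in seen_in_positive)
        labels.append(1.0 if still_ok else 0.0)
    return labels

# Illustrative use mirroring Figure 3: if a positive path computes 7 -> 9 -> 18,
# a path 7 -> 9 -> 8 gets labels [1.0, 1.0, 0.0].
print(label_steps([7, 9, 8], [[7, 9, 18]]))
```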
## 3 Experimental Setup
### Reasoning Tasks
Arithmetic Reasoning. Following Wang et al. (2022c), we use AsDiv (Miao et al., 2020), SingleEq (Koncel-Kedziorski et al., 2015), MultiArith (Roy and Roth, 2015), SVAMP (Patel et al., 2021), and GSM8K (Cobbe et al., 2021).

Commonsense Reasoning. Following Wang et al. (2022c), we use CommonsenseQA (Talmor et al., 2019) and StrategyQA (Geva et al., 2021).

Inductive Reasoning. We use CLUTRR (Sinha et al., 2019), a diagnostic benchmark for inductive reasoning that requires inferring kinship relations between characters in short stories.
### Details
Language Models. We use three OpenAI language models: _davinci_, _text-davinci-002_, and _code-davinci-002_. We use the default parameters except for a temperature of \(0.5\) in sampling.

Exemplars. For arithmetic/commonsense/inductive reasoning, each prompt contains \(5/7/7\) exemplars. For DiVeRSe, each question has \(5\) different prompts, and \(20\) reasoning paths are sampled from the language model for each prompt. For arithmetic reasoning, the exemplars are randomly sampled from the training dataset of GSM8K; for CLUTRR, the exemplars are sampled from its training dataset, with reasoning paths synthesized by handcrafted rules (detailed settings for CLUTRR are listed in Appendix D); for StrategyQA and CommonsenseQA, their original datasets do not contain enough exemplars with well-annotated reasoning paths, so we construct \(1,000\) pseudo exemplars by "self-teaching" (the approach and the noise issue are discussed in Appendix B) from "seed" exemplars provided by Wei et al. (2022).

Training Datasets. For each task, we sample \(1,000\) \(\langle\text{question},\text{answer}\rangle\) pairs from the training dataset to train the verifier.

Verifier. We fine-tune _deberta-v3-large_ (He et al., 2021) with learning rate \(1\times 10^{-5}\) and batch size \(128\). For the step-aware verifier, we select the best \(\alpha\) among \(0.0/0.1/0.2/0.3\).
Figure 3: How step-level labels are extracted. This figure shows four reasoning paths for a math word problem: the first two are positive and the bottom two are negative. The path \(7\to 9\to 18\) means that the first step calculates 7, the second step calculates 9, and the third step calculates the final answer 18. For the last path, the third step (which calculates \(8\)) has never occurred in any positive reasoning paths, thus we regard this step and all steps after it as negative steps.
## 4 Main Results
Table 1 shows the overall experimental results. We mainly compare DiVeRSe with two baselines: (1) greedily decoding a single reasoning path (Wei et al., 2022), referred to as _Greedy Decoding_; (2) sampling \(100\) reasoning paths and then selecting the final answer via majority voting (Wang et al., 2022c), referred to as _Self-Consistency_.
### Effectiveness
Experimental results clearly demonstrate that DiVeRSe can bring significant and consistent improvements over recent strong baselines. The improvements are across different models (_davinci_, _text-davinci-002_ and _code-davinci-002_) as well as different reasoning skills (eight tasks in three reasoning skills). Taking GSM8K as an example, compared to _Greedy Decoding_ and _Self-Consistency_, DiVeRSe brings improvements of \(22.2\%/12.0\%\) on _davinci_, \(33.1\%/12.0\%\) on _text-davinci-002_, and \(27.0\%/5.6\%\) on _code-davinci-002_. Compared to _Self-Consistency_, DiVeRSe achieves average improvements of \(5.6\%/5.1\%/54.3\%\) on the three reasoning skills, respectively.
### Comparing to Previous SOTAs
In Table 1, we also compare DiVeRSe with: (1) previous SOTA results based on fine-tuning; (2) recent SOTA results (Wei et al., 2022) based on PaLM (Chowdhery et al., 2022), a gigantic language model with 540 billion parameters.4
Footnote 4: DiVeRSe can also be applied to PaLM, but PaLM is not publicly available.
On all the five arithmetic reasoning tasks, DiVeRSe (with _code-davinci-002_) achieves new SOTA results, with an average improvement of \(6.2\%\). On the two commonsense reasoning tasks, the performance of DiVeRSe is slightly lower (\(-1.9\%\)) than that of PaLM-based self-consistency. We speculate that the reason might be that these two commonsense reasoning tasks are multiple-choice tasks rather than open-ended generation tasks, resulting in more false-positive exemplars in the pseudo exemplar base (details are discussed in Section B.2). Regarding inductive reasoning, DiVeRSe achieves a surprisingly good performance of \(95.9\%\) on the CLUTRR task, outperforming (\(+28.9\%\)) the previous SOTA result with fine-tuning (Sinha et al., 2019).5

Table 1: Overall results (the extracted table is damaged; only the header and the previous fine-tuning SOTA row survive). Columns: GSM8K, AsDiv, MultiArith, SVAMP, SingleEq, CommonsenseQA, StrategyQA, CLUTRR. Previous SOTA (fine-tuning): 57 / 75.3 / 60.5 / 57.4 / 32.5 / 91.2 / 73.9 / 67.0.
Footnote 5: Sinha et al. (2019) also introduced a method with \(100\%\) accuracy. We do not include it in the comparison, as this method requires a domain-specific system with complicated rules to extract a knowledge graph for each input text.
## 5 Case Study
Table 2 shows an example of step-level scores given by the step-aware verifier. Steps in the correct reasoning path have relatively high scores, while the scores in the wrong reasoning path reveal where the path starts to go wrong. This indicates that, besides improving performance, the step-aware verifier also brings interpretability by exposing step-level correctness. We also show some extra examples of majority voting in Table 10.
## 6 Analysis
We also conduct ablation experiments and analysis to investigate the keys to the success of DiVeRSe.
### The Effectiveness of Diverse Prompts
By diversifying both prompts and reasoning paths (\(\langle M_{1}=5,M_{2}=20\rangle\)), we consistently improve performance over the sampling decoding approach (\(\langle M_{1}=1,M_{2}=100\rangle\)) of Wang et al. (2022c), as shown in Table 3. Both methods use majority voting. Table 4 further reveals that neither only using diverse prompts nor only using sampling is optimal. In other words, _the best performance is achieved by combining diverse prompts and sampling_. Moreover, Figure 4 demonstrates that _diverse prompts lead to more diverse reasoning paths_. We hypothesize that this diversity contributes to the performance improvement by: (1) making correct results more distinguishable from varied errors during inference; and (2) providing more diverse negative samples for enhancing the verifier's generalizability during training.
### The Effectiveness of Voting Verifier
We compare three algorithms to conclude the agreement from diverse reasoning paths: majority voting, verifier, and voting verifier. Table 5 shows the results. _Compared to majority voting, our voting verifier can significantly and consistently boost reasoning performance across different tasks and different language models_. Verifier without voting often outperforms majority voting, but extending it to voting verifier can further boost the performance.
\begin{table}
\begin{tabular}{l c} \hline \hline \(\langle M_{1},M_{2}\rangle\) & GSM8K \\ \hline \(M_{1}=1,M_{2}=100\) & 76.7 \\ \(M_{1}=5,M_{2}=20\) & **80.0** \\ \(M_{1}=10,M_{2}=10\) & 79.8 \\ \(M_{1}=100,M_{2}=1\) & 73.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: GSM8K majority voting results for different \(\langle M_{1},M_{2}\rangle\) settings on _code-davinci-002_.
Figure 4: Diverse prompts increase the diversity of GSM8K reasoning paths and their final answers. This is beneficial for the voting verifier. Left: the average number of distinct reasoning paths per question (we consider two reasoning paths to be the same if they have the same intermediate result chain as shown in Figure 3). Right: the average number of distinct final answers per question.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & GSM8K & CQA & CLUTRR \\ \hline \multicolumn{4}{l}{davinci:} \\ \(M_{1}=1,M_{2}=100\) & 18.9 & 57.4 & 42.5 \\ \(M_{1}=5,M_{2}=20\) & **21.3** & **57.5** & **45.9** \\ \hline \multicolumn{4}{l}{text-davinci-002:} \\ \(M_{1}=1,M_{2}=100\) & 58.2 & 72.9 & 34.9 \\ \(M_{1}=5,M_{2}=20\) & **61.3** & **77.3** & **35.6** \\ \hline \multicolumn{4}{l}{code-davinci-002:} \\ \(M_{1}=1,M_{2}=100\) & 76.7 & 77.3 & 35.6 \\ \(M_{1}=5,M_{2}=20\) & **80.0** & **78.8** & **43.8** \\ \hline \hline \end{tabular}
\end{table}
Table 3: The effectiveness of diverse prompts (\(\langle 5,20\rangle\)) compared to pure sampling decoding (Wang et al., 2022c), under majority voting.
### The Effectiveness of Step-aware Verifier
We evaluate the impact of incorporating step-level information into the voting verifier of DiVeRSe. Table 6 shows the performance of DiVeRSe with and without the step-aware mechanism on both the GSM8K and the CommonsenseQA datasets. We find that _using the step-aware verifier improves the performance in most of the experiments_. The only exception is _code-davinci-002_ on GSM8K, where the step-aware verifier slightly lowers the performance. We hypothesize that _code-davinci-002_ is more capable of generating high-quality reasoning paths, and thus does not benefit much from the step-level information.
**Detailed Human Evaluation of Reasoning Steps.** We further examine the quality of the reasoning steps generated by DiVeRSe (with/without the step-aware mechanism) for GSM8K, by asking human annotators to rate them on correctness (formulas and calculation results), textual fluency, and logical coherence. For each test question, we compare three reasoning paths produced by _code-davinci-002_: the one with the highest verifier score, the one with the highest step-aware verifier score, and a randomly chosen one. The annotators (master students) label any incorrect or unsatisfactory reasoning steps in each path (single-blind) and explain why. We collect annotations for 200 test questions, half of which have correct final answers from all three paths, and half of which have incorrect final answers from all three paths.
We find that **all the reasoning paths with correct final answers are also correct in every intermediate step**, which shows that _code-davinci-002_ can reliably generate accurate reasoning steps, not just lucky guesses. However, we also find that **many of the correct reasoning paths have unnecessary steps**. Figure 5(a) shows that \(40\%\) of the random paths have redundant steps, and the verifier can lower this percentage to \(31\%\). We also find that **the step-aware verifier can further eliminate redundant reasoning steps** from \(31\%\) to \(20\%\).
Furthermore, for the incorrect reasoning paths, we find that **the step-aware mechanism helps produce more correct steps before making mistakes**. For each failed test question, we compare the number of correct steps in the path with the highest verifier score and the path with the highest step-aware verifier score (by human evaluation). Figure 5(b) shows that for \(33\%\)/\(17\%\) of the failed test cases, the step-aware verifier generates more/fewer correct steps than the verifier without the step-aware mechanism.

Figure 5: Human evaluation on GSM8K shows the effectiveness of the step-aware mechanism for the verifier.

Figure 6: The distribution of error types in incorrect reasoning steps.

\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & GSM8K & CQA & CLUTRR \\ \hline \multicolumn{4}{l}{davinci:} \\ Voting & 21.3 & 57.4 & 45.9 \\ Verifier & 27.0 & 74.1 & **93.2** \\ Voting Verifier & **30.6** & **75.0** & 92.5 \\ \hline \multicolumn{4}{l}{text-davinci-002:} \\ Voting & 61.3 & 77.3 & 35.6 \\ Verifier & 62.7 & 77.9 & **93.8** \\ Voting Verifier & **68.9** & **79.2** & **93.8** \\ \hline \multicolumn{4}{l}{code-davinci-002:} \\ Voting & 80.0 & 75.4 & 43.8 \\ Verifier & 65.9 & **78.8** & **95.9** \\ Voting Verifier & **82.3** & **78.8** & **95.9** \\ \hline \hline \end{tabular}
\end{table}
Table 5: The effectiveness of the voting verifier. All experiments in this table use \(\langle M_{1},M_{2}\rangle=\langle 5,20\rangle\).
Step Error Types. Figure 6 shows the distribution of error types in the incorrect reasoning steps. We see that \(95\%\) of the errors are caused by incorrect formulations (i.e., using wrong intermediate results or operators and generating invalid formulas, which lead to incorrect answers). We also see that, although _code-davinci-002_ often makes division calculation errors (e.g., \(10/3=3\)), both the verifier and the step-aware verifier can effectively assign low scores to such paths, thus improving the performance.
### How Many Diverse Outputs Do We Need?
Figure 7 shows the accuracy at different \(M\) values, where \(M\) is the number of reasoning paths sampled from the \(100\) generated paths for each question. We observe that: (1) the accuracy increases with more reasoning paths, but the improvement becomes marginal at \(M\geq 50\); (2) DiVeRSe outperforms self-consistency significantly and consistently at different \(M\) values.
### How Much Training Data Do We Need?
DiVeRSe requires a dataset with reasoning paths for training the verifier. Figure 8 shows how the size of this dataset affects the performance. We observe that the performance drops by only about \(2\%\) even when the size of the training data is cut by \(75\%\) (from \(1,000\) to \(250\)). With the same reasoning paths, the voting verifier performs better than majority voting, while the verifier without voting causes significant performance drops.
### The Impact of the Number of Exemplars
We conduct experiments with \(k=3/5/8\) (\(k\) is the number of exemplars used in each prompt) on GSM8K. Figure 9 shows the results. We observe that _using 8 exemplars in each prompt further boosts the GSM8K accuracy to \(83.2\%\)_.
## 7 Related Work
Reasoning Skills.Researchers in the literature have proposed many benchmarks requiring various reasoning skills, including commonsense reasoning (Zellers et al., 2018; Talmor et al., 2019; Bhagavatula et al., 2019; Geva et al., 2021), numerical reasoning (Dua et al., 2019), multi-hop reasoning (Yang et al., 2018), arithmetic reasoning (Koncel-Kedziorski et al., 2015; Roy and Roth, 2015; Miao et al., 2020; Patel et al., 2021; Cobbe et al., 2021), logical reasoning (Liu et al., 2020; Yu et al., 2020), inductive reasoning (Sinha et al., 2019), and tabular reasoning (Chen et al., 2020; Zhu et al., 2021).
Reasoning with Symbolic Systems.Much research in the literature enhances the reasoning capabilities of machine learning systems by exploiting symbolic systems, including knowledge graphs (Mihaylov and Frank, 2018; Bauer et al., 2018; Kundu et al., 2019; Wang et al., 2019; Lin et al., 2019; Ding et al., 2019; Feng et al., 2020; Wang et al., 2022b), or question taxonomies (Dua et al., 2019; Andor et al., 2019; Hu et al., 2019; Wang et al., 2022a). Although these methods work well on specific benchmarks, they usually require domain-specific designs and human efforts, thus limiting the generalizability.
Reasoning via Language Models.This line of work aims to address reasoning tasks in a general sequence-to-sequence manner, empowered by reasoning-aware pre-training or fine-tuning of language models.
\begin{table}
\begin{tabular}{l c c}
\hline \hline
 & GSM8K & CommonsenseQA \\
\hline
\multicolumn{3}{l}{davinci:} \\
DiVeRSe (without step) & 30.6 & 75.0 \\
DiVeRSe (with step) & **30.9** & **76.0** \\
\hline
\multicolumn{3}{l}{text-davinci-002:} \\
DiVeRSe (without step) & 68.9 & 79.2 \\
DiVeRSe (with step) & **70.2** & **79.8** \\
\hline
\multicolumn{3}{l}{code-davinci-002:} \\
DiVeRSe (without step) & **82.3** & 78.8 \\
DiVeRSe (with step) & 81.5 & **79.9** \\
\hline \hline
\end{tabular}
\end{table}
Table 6: The effectiveness of step-aware voting verifier, with \(\langle M_{1},M_{2}\rangle=\langle 5,20\rangle\).
Figure 7: GSM8K accuracy at different \(M\) values (how many reasoning paths are used for each question).
For example, Deng et al. (2021) proposed to train the language model with crawled data from the internet; Asai and Hajishirzi (2020) proposed a logic-guided data augmentation method to pre-train the language model; Shen et al. (2021); Cobbe et al. (2021) proposed to train a verifier to rank solutions sampled from fine-tuned language models; Geva et al. (2020); Yoran et al. (2022); Campagna et al. (2020); Wang et al. (2022a) proposed to equip language models with reasoning abilities by generating training examples with human-designed templates; Pi et al. (2022) proposed to inject reasoning capabilities into language models by continual pre-training on program execution data.
Reasoning via Prompting Gigantic Language Models.Gigantic language models like GPT-3 (Brown et al., 2020) have demonstrated impressive few-shot learning capabilities in many tasks, attracting much research interest in making them better few-shot learners (Zhao et al., 2021; Holtzman et al., 2021; Min et al., 2021; Liu et al., 2022; Lu et al., 2021; Rubin et al., 2021; Min et al., 2022). However, these methods struggle to address tasks requiring reasoning skills. To mitigate this, a recent line of research focuses on unleashing the reasoning capabilities of gigantic language models via better prompting strategies. Wei et al. (2022) proposed _chain-of-thought reasoning_, whose key insight is the insertion of multi-step reasoning paths before generating the final answers; Wang et al. (2022c) proposed to improve chain-of-thought reasoning via _self-consistency_, which selects the most consistent answer among different reasoning paths sampled from the language model; Zhou et al. (2022) and Creswell et al. (2022) proposed to leverage gigantic language models to decompose questions into sub-questions, thereby addressing them in an iterative manner; Kojima et al. (2022) showed that gigantic language models can even be good zero-shot reasoners, given prompts that induce step-by-step reasoning; Lampinen et al. (2022) proposed building a prompt by selecting examples and explanations together, substantially improving performance over selecting examples alone. Despite their great successes, these works have their limitations. This paper continues this line of research, focusing on a diverse verifier over reasoning steps.
## 8 Conclusion and Future Work
In this paper, we present DiVeRSe, a novel and general method to enhance the reasoning abilities of large language models. Our method builds on the idea of prompting language models with multi-step reasoning paths, but introduces three key innovations: diverse prompts, a voting verifier, and a step-aware verifier. The step-aware verifier is especially novel and effective, as it verifies each reasoning step separately and enables a detailed analysis of the model's behavior at each step. We demonstrate the superiority of DiVeRSe through extensive experiments. For instance, using _code-davinci-002_, our method achieves state-of-the-art performance on most reasoning tasks, surpassing the 540B PaLM model with previous prompting techniques.
There are many directions for our future work. (1) As discussed in Appendix B.2, we will continue to investigate how to reduce or recognize false positive pseudo exemplars. (2) We plan to investigate mechanisms to produce better diverse prompts than simple sampling.
Figure 8: DiVeRSe performance (_code-davinci-002_) on GSM8K with different sizes of the training dataset (without labeled reasoning paths).
Figure 9: DiVeRSe performance (_code-davinci-002_) on GSM8K when each prompt contains \(3/5/8\) exemplars.
(3) We will extend DiVeRSe to other tasks and continue to design better prompting techniques to elicit the power of gigantic language models.
## 9 Limitations
Computing Resources.Despite the strong performance it achieves, our framework needs to be applied to large language models like GPT-3 or PaLM. Inference with these models costs more time and money than fine-tuning smaller models such as RoBERTa (Liu et al., 2019).
Faithfulness.Although DiVeRSe can significantly improve the accuracy of final answers, we still cannot guarantee that the reasoning paths produced by the language models are 100 percent faithful. This is the key challenge and future direction for this line of research (chain-of-thought reasoning).
More Training Data.DiVeRSe needs more labeled data with well-annotated reasoning paths to construct diverse prompts, and it also needs a training dataset for supervising the verifier. From another point of view, however, this limitation can also be regarded as a contribution: it studies how chain-of-thought reasoning can be further improved when more training data is available than just a few exemplars.
Human Evaluation of Reasoning Steps.We use human evaluation to measure the quality of the intermediate steps in reasoning paths since few current works provide reliable frameworks to evaluate the quality of reasoning steps.
## References
* Andor et al. (2019) Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. 2019. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 5947-5952, Hong Kong, China. Association for Computational Linguistics.
* Asai and Hajishirzi (2020) Akari Asai and Hannaneh Hajishirzi. 2020. Logic-guided data augmentation and regularization for consistent question answering. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 5642-5650, Online. Association for Computational Linguistics.
* Bauer et al. (2018) Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question answering tasks. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 4220-4230, Brussels, Belgium. Association for Computational Linguistics.
* Bhagavatula et al. (2019) Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in Neural Information Processing Systems_, 33:1877-1901.
* Campagna et al. (2020) Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, and Monica Lam. 2020. Zero-shot transfer learning with synthesized data for multi-domain dialogue state tracking. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 122-132, Online. Association for Computational Linguistics.
* Chen et al. (2020) Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In _Findings of the Association for Computational Linguistics: EMNLP 2020_, pages 1026-1036, Online. Association for Computational Linguistics.
* Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_.
* Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. _arXiv preprint arXiv:2110.14168_.
* Creswell et al. (2022) Antonia Creswell, Murray Shanahan, and Irina Higgins. 2022. Selection-inference: Exploiting large language models for interpretable logical reasoning.
* Deng et al. (2021) Xiang Deng, Yu Su, Alyssa Lees, You Wu, Cong Yu, and Huan Sun. 2021. ReasonBERT: Pretrained to reason with distant supervision. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 6112-6127, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Ding et al. (2019) Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 2694-2703, Florence, Italy. Association for Computational Linguistics.
* Dua et al. (2019) Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 2368-2378, Minneapolis, Minnesota. Association for Computational Linguistics.
* Feng et al. (2020) Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multi-hop relational reasoning for knowledge-aware question answering.
* Geva et al. (2020) Mor Geva, Ankit Gupta, and Jonathan Berant. 2020. Injecting numerical reasoning skills into language models. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 946-958, Online. Association for Computational Linguistics.
* Geva et al. (2021) Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. _Transactions of the Association for Computational Linguistics_, 9:346-361.
* He et al. (2022) Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In _International Conference on Learning Representations_.
* He et al. (2021) Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In _International Conference on Learning Representations_.
* Holtzman et al. (2021) Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning_, pages 2790-2799. PMLR.
* Hu et al. (2021) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. _arXiv preprint arXiv:2106.09685_.
* Hu et al. (2019) Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 1596-1606, Hong Kong, China. Association for Computational Linguistics.
* Jin et al. (2022) Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. 2022. A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models. In _Proceedings of the 60th Annual Meetingof the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 2763-2775, Dublin, Ireland. Association for Computational Linguistics.
* Kojima et al. (2022) Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners.
* Koncel-Kedziorski et al. (2015) Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. _Transactions of the Association for Computational Linguistics_, 3:585-597.
* Kundu et al. (2019) Souvik Kundu, Tushar Khot, Ashish Sabharwal, and Peter Clark. 2019. Exploiting explicit paths for multi-hop reading comprehension. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 2737-2747, Florence, Italy. Association for Computational Linguistics.
* Lampinen et al. (2022) Andrew K Lampinen, Ishita Dasgupta, Stephanie CY Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L McClelland, Jane X Wang, and Felix Hill. 2022. Can language models learn from explanations in context? _arXiv preprint arXiv:2204.02329_.
* Le Scao and Rush (2021) Teven Le Scao and Alexander M Rush. 2021. How many data points is a prompt worth? In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 2627-2636.
* Lin et al. (2019) Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 2829-2839, Hong Kong, China. Association for Computational Linguistics.
* Liu et al. (2022) Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In _Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures_, pages 100-114, Dublin, Ireland and Online. Association for Computational Linguistics.
* Liu et al. (2020) Jian Liu, Leyang Cui, Hanneng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Lu et al. (2021) Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2021. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity.
* Miao et al. (2020) Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing english math word problem solvers. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 975-984.
* Mihaylov and Frank (2018) Todor Mihaylov and Anette Frank. 2018. Knowledge-eadeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 821-832, Melbourne, Australia. Association for Computational Linguistics.
* Min et al. (2021) Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. Metaicl: Learning to learn in context.
* Min et al. (2022) Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work?
* Patel et al. (2021) Arkil Patel, Satwik Bhattacharya, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 2080-2094.
* Pi et al. (2022) Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Yan Gao, Qiang Fu, Jian-Guang Lou, and Weizhu Chen. 2022. Reasoning like program executors. _arXiv preprint arXiv:2201.11473_.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. _OpenAI blog_, 1(8):9.
* Roy and Roth (2015) Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_, pages 1743-1752.
* Rubin et al. (2021) Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2021. Learning to retrieve prompts for in-context learning.
* Shen et al. (2021) Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. 2021. Generate & rank: A multi-task framework for math word problems. In _Findings of the Association for Computational Linguistics: EMNLP 2021_, pages 2269-2279.
* Sinha et al. (2019) Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L Hamilton. 2019. Clutrr: A diagnostic benchmark for inductive reasoning from text. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 4506-4515.
* Talmor et al. (2019) Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 4149-4158.
* Wang et al. (2022a) Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2022a. Logic-driven context extension and data augmentation for logical reasoning of text. In _Findings of the Association for Computational Linguistics: ACL 2022_, pages 1619-1629, Dublin, Ireland. Association for Computational Linguistics.
* Wang et al. (2019) Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, and Michael Witbrock. 2019. Improving natural language inference using external knowledge in the science questions domain. In _Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence_, AAAI'19/IAAI'19/EAAI'19. AAAI Press.
* Wang et al. (2022b) Xiting Wang, Kunpeng Liu, Dongjie Wang, Le Wu, Yanjie Fu, and Xing Xie. 2022b. Multi-level recommendation reasoning over knowledge graphs with reinforcement learning. In _The Web Conference 2022_.
* Wang et al. (2022c) Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022c. Self-consistency improves chain of thought reasoning in language models.
* Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. _arXiv preprint arXiv:2201.11903_.
* Xu et al. (2021) Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, and Xuedong Huang. 2021. Human parity on commonsenseqa: Augmenting self-attention with external attention. _arXiv preprint arXiv:2112.03254_.
* Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.
* Yoran et al. (2022) Ori Yoran, Alon Talmor, and Jonathan Berant. 2022. Turning tables: Generating examples from semi-structured tables for endowing language models with reasoning skills. In _Proceedings of the 60thAnnual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 6016-6031, Dublin, Ireland. Association for Computational Linguistics.
* Yu et al. (2020) Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. 2020. Reclor: A reading comprehension dataset requiring logical reasoning.
* Zelikman et al. (2022) Eric Zelikman, Yuhuai Wu, and Noah D Goodman. 2022. Star: Bootstrapping reasoning with reasoning. _arXiv preprint arXiv:2203.14465_.
* Zellers et al. (2018) Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 93-104, Brussels, Belgium. Association for Computational Linguistics.
* Zhao et al. (2021) Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models.
* Zhou et al. (2022) Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reasoning in large language models.
* Zhu et al. (2021) Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. 2021. Tat-qa: A question answering benchmark on a hybrid of tabular and textual content in finance.
This is the Appendix for the paper "Making Large Language Models Better Reasoners with Step-Aware Verifier".
## Appendix A Preliminaries
Prompting.Prompting means prepending a few exemplars to the task input \(\mathbf{x}\) and generating the output \(\mathbf{y}\) from the pretrained language model:
\[p(\mathbf{y}|C,\mathbf{x})=\prod_{t=1}^{|\mathbf{y}|}p_{\text{LM}}(y_{t}|C, \mathbf{x},y_{<t}), \tag{3}\]
where \(C\) is the concatenation of \(K\) exemplars:
\[C=(\overline{\mathbf{x}}_{1},\overline{\mathbf{y}}_{1});(\overline{\mathbf{x }}_{2},\overline{\mathbf{y}}_{2});...;(\overline{\mathbf{x}}_{K},\overline{ \mathbf{y}}_{K}). \tag{4}\]
We denote **prompt** as the concatenation of the exemplars \(C\) and the input \(\mathbf{x}\).
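For concreteness, a minimal Python sketch of this construction is given below; the "Q:/A:" template and the separators are illustrative assumptions rather than the exact format used in our experiments.

```python
def build_prompt(exemplars, question):
    """Concatenate K (question, answer) exemplars (Eq. 4) with the task input x."""
    parts = [f"Q: {x}\nA: {y}" for x, y in exemplars]
    parts.append(f"Q: {question}\nA:")  # the language model continues from here (Eq. 3)
    return "\n\n".join(parts)
```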
Reasoning Paths.For reasoning tasks that aim to generate an answer \(\mathbf{y}\) for a question \(\mathbf{x}\), Wei et al. (2022) proposed the insertion of a reasoning path \(\mathbf{z}\) before generating the answer \(\mathbf{y}\):
\[C^{\prime}=(\overline{\mathbf{x}}_{1},\overline{\mathbf{z}}_{1},\overline{ \mathbf{y}}_{1});...;(\overline{\mathbf{x}}_{K},\overline{\mathbf{z}}_{K}, \overline{\mathbf{y}}_{K}), \tag{5}\]
where \(\mathbf{z}_{i}\) is a text **reasoning path** of how the answer \(\mathbf{y}_{i}\) is reasoned step-by-step for question \(\mathbf{x}_{i}\).
Then, during inference, a reasoning path \(\mathbf{z}\) will be generated before the answer \(\mathbf{y}\):
\[p(\mathbf{y}|C^{\prime},\mathbf{x})=p(\mathbf{z}|C^{\prime},\mathbf{x})\cdot p (\mathbf{y}|C^{\prime},\mathbf{x},\mathbf{z}). \tag{6}\]
Figure 10 demonstrates this idea in arithmetic reasoning (GSM8K), and Table 7 demonstrates this idea in commonsense reasoning (StrategyQA) and inductive reasoning (CLUTRR).
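A minimal sketch of the chain-of-thought variant follows; the exemplar template and the "The answer is" marker are assumptions for illustration, not necessarily the exact strings used in our experiments.

```python
def build_cot_exemplar(question, reasoning_path, answer):
    """A chain-of-thought exemplar inserts a reasoning path z before the answer y (Eq. 5)."""
    return f"Q: {question}\nA: {reasoning_path} The answer is {answer}."

def split_completion(completion):
    """Split a generated completion into (reasoning path z, final answer y), cf. Eq. (6)."""
    path, marker, answer = completion.rpartition("The answer is")
    if not marker:  # no marker found: treat the whole completion as the path
        return completion.strip(), None
    return path.strip(), answer.strip(" .")
```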
## Appendix B Boosting Reasoning Paths via Self-Teaching
In this section, we first introduce self-teaching, the method we use to construct a larger exemplar base when the original dataset does not contain enough data with well-annotated reasoning paths (Appendix B.1). We then discuss the noise issue when facing multiple-choice tasks (Appendix B.2).
### Self Teaching
A critical issue of DiVeRSe is **how to provide diverse prompts**.6 Supposing that there is an exemplar base \(E\), we can sample \(K\) exemplars from it to construct a prompt, and repeat this \(M_{1}\) times independently to construct \(M_{1}\) prompts with diverse exemplars.
Footnote 6: Wang et al. (2022c) tried an ensemble-based approach, i.e., to permutate exemplars in the original prompt. However, this strategy does not increase diversity in terms of exemplars.
For scenarios that do not have sufficient exemplars (i.e., \(|E|<K*M_{1}\)), we propose to **bootstrap the diversity of prompts by "self-teaching"**, i.e., generating pseudo reasoning paths from a few exemplars and some \(\langle\text{question},\text{answer}\rangle\) pairs without reasoning paths.7 Suppose that \(D\) is a dataset without reasoning paths, consisting of
\begin{table}
\begin{tabular}{l} \hline
**[StrategyQA]** Yes or no: Could a llama birth twice
**[CLUTRR]** Roy was eating lunch with his son John and his wife Mary. What kind of relative is John to Mary? \(\triangleright\)_ \\ _John is the son of Roy. Roy is the husband of Mary. Thus, John is the son of Mary. The answer is **son**._ \\ \hline \hline \end{tabular}
\end{table}
Table 7: Besides arithmetic reasoning, we also investigate commonsense and inductive reasoning.
Figure 10: Prompting large language models to generate different reasoning paths, then selecting the final answer via majority voting (Wang et al., 2022c).
\((\mathbf{x},\mathbf{y}^{*})\) pairs. Given the small exemplar base \(E\), for each \((\mathbf{x},\mathbf{y}^{*})\in D\), we can use prompting to generate a reasoning path \(\mathbf{z}\) and the predicted answer \(\mathbf{y}\). We define the pseudo exemplar base \(E^{\prime}\) as:
\[E^{\prime}=\{(\mathbf{x},\mathbf{z},\mathbf{y})|(\mathbf{x},\mathbf{y}^{*})\in D,\mathbf{y}=\mathbf{y}^{*}\}, \tag{7}\]
then \(E\cup E^{\prime}\) can be regarded as the new exemplar base for generating diverse prompts.
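A minimal sketch of this construction (Eq. 7) is given below; `generate_path`, which prompts the language model with the seed exemplars and returns a reasoning path together with its predicted answer, is an assumed interface.

```python
def build_pseudo_exemplar_base(dataset, seed_exemplars, generate_path):
    """Keep a generated reasoning path only if its answer matches the gold answer (Eq. 7)."""
    pseudo = []
    for question, gold_answer in dataset:  # (x, y*) pairs without reasoning paths
        path, predicted = generate_path(seed_exemplars, question)
        if predicted == gold_answer:  # filter out paths with mismatched answers
            pseudo.append((question, path, predicted))
    return pseudo
```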
### Noises in Multiple Choice Tasks
In our experimental setup, StrategyQA and CommonsenseQA are more challenging than other tasks, as they use pseudo exemplars generated through "self-teaching" (Appendix B.1).
"Self-teaching" may lead to bad exemplars, whose reasoning paths are invalid but happen to yield answers coinciding with the ground truth. Questions in StrategyQA/CommonsenseQA are two-choice/four-choice questions, respectively. Therefore, such noise would be more serious in StrategyQA than in CommonsenseQA. This somehow explains why DiVeRSe can achieve comparable performance (\(-0.8\%\)) as the PaLM-based SOTA on CommonsenseQA, while it sees a \(3.0\%\) performance decline to PaLM on StrategyQA, which has only two choices. In other words, it is easier for StrategyQA to yield a right answer but a misleading reasoning path.
## Appendix C Data Statistics
Table 8 shows the reasoning benchmarks we use in this paper with examples. We use the same test sets as Wei et al. (2022) for GSM8K, AsDiv, MultiArith, SVAMP, SingleEq, and CommonsenseQA.
For StrategyQA, there are \(2,290\) test cases (i.e., questions paired with TRUE/FALSE labels), but there is no other case that can be leveraged by DiVeRSe to construct diverse exemplars (as introduced in Section 2.1). To address this problem, we randomly divide these \(2,290\) test cases into two equal parts (denoted as \(T_{1}\) and \(T_{2}\)). For each Di
\begin{table}
\begin{tabular}{l c l} \hline \hline Dataset & \(N\) & Example Question \\ \hline GSM8K & 1319 & James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? \\ \hline AsDiv & 2096 & Seven red apples and two green apples are in the basket. How many apples are in the basket? \\ \hline MultiArith & 600 & The school cafeteria ordered 42 red apples and 7 green apples for students lunches. But, if only 9 students wanted fruit, how many extra did the cafeteria end up with? \\ \hline SVAMP & 1000 & Paco had 26 salty cookies and 17 sweet cookies. He ate 14 sweet cookies and 9 salty cookies. How many salty cookies did Paco have left? \\ \hline SingleEq & 508 & Terez has 44 cows on his farm. 50 percent of the cows are female, and 50 percent of the females are pregnant. How many pregnant female cows does Terez have? \\ \hline CommonsenseQA & 3387 & Sammy wanted to go to where the people were. Where might he go? Options: (a) race track (b) populated areas (c) desert (d) apartment (e) roadblock \\ \hline StrategyQA & 2280 & Could you go to New York Public Library and the Six Flags Great Escape in the same day? \\ \hline CLUTRR & 447 & Kelly and her mother Ernest made breakfast together. Constance and her husband Ernest wanted a child badly What kind of relative is Kelly to Constance? The possible relationships are: sister, son, aunt, granddaughter, father, grandfather, grandmother, mother-in-law, uncle, niece, mother, brother, daughter, nephew, grandson, son-in-law, father-in-law, daughter-in-law. \\ \hline \hline \end{tabular}
\end{table}
Table 8: Reasoning benchmarks we use in this paper with examples. \(N\) means the number of test cases.
# Making Large Language Models Better Reasoners with Step-Aware Verifier
Yifei Li\({}^{1,2}\); Zeqi Lin\({}^{2}\), Shizhuo Zhang\({}^{2}\), Qiang Fu\({}^{2}\), Bei Chen\({}^{2}\),
Jian-Guang Lou\({}^{2}\), Weizhu Chen\({}^{2}\)
\({}^{1}\) National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
\({}^{2}\) Microsoft Corporation
{yifeili, zeqi.lin, v-shizzhang, qifu, bei.chen, jlou, wzchen}@microsoft.com
liyifei@stu.pku.edu.cn
Work was done during an internship at Microsoft Research Asia.
###### Abstract
Few-shot learning is a challenging task that requires language models to generalize from limited examples. Large language models like GPT-3 and PaLM have made impressive progress in this area, but they still face difficulties in reasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve their reasoning skills, previous work has proposed to guide the language model with prompts that elicit a series of reasoning steps before giving the final answer, achieving a significant improvement on GSM8K from \(17.9\%\) to \(58.1\%\) in problem-solving rate. In this paper, we present DiVeRSe (Diverse Verifier on Reasoning Step), a novel approach that further enhances the reasoning capability of language models. DiVeRSe has three main components: first, it generates diverse prompts to explore different reasoning paths for the same question; second, it uses a verifier to filter out incorrect answers based on a weighted voting scheme; and third, it verifies each reasoning step individually instead of the whole chain. We evaluate DiVeRSe on the latest language model _code-davinci-002_ and show that it achieves new state-of-the-art results on six of eight reasoning benchmarks (e.g., GSM8K \(74.4\%\to 83.2\%\)).
## 1 Introduction
Large pretrained language models (PLMs) have shown remarkable performance on various natural language processing tasks, either by few-shot learning with prompts (Radford et al., 2019; Le Scao and Rush, 2021; Jin et al., 2022) or by fine-tuning (Houlsby et al., 2019; Hu et al., 2021; He et al., 2022). However, despite the increasing size and capacity of PLMs such as GPT-3 with 175B parameters (Brown et al., 2020) and PaLM with 540B parameters (Chowdhery et al., 2022), their reasoning abilities are still limited and often require multiple steps to produce correct answers, especially for tasks involving arithmetic, commonsense, or inductive reasoning (Cobbe et al., 2021).
Recent works (Wei et al., 2022; Zhou et al., 2022; Kojima et al., 2022; Lampinen et al., 2022) have demonstrated that PLMs possess some latent reasoning capabilities, but they need carefully designed prompts to activate them. For instance, Wei et al. (2022) proposed chain-of-thought reasoning, which inserts multi-step reasoning paths before generating the final answers, and achieved significant improvement on the GSM8K arithmetic benchmark (Cobbe et al., 2021). Wang et al. (2022c) further introduced a voting mechanism to select the most consistent answer among different reasoning paths, and achieved state-of-the-art results on several reasoning benchmarks using the PaLM model (Chowdhery et al., 2022). Building on these successes, this paper continues this line of research and advances the reasoning capabilities of PLMs in three aspects, as illustrated in Figure 1.
First, we propose to increase the diversity of reasoning paths by not only sampling from a single prompt, but also varying the prompt itself. We hypothesize that different prompts can elicit different ways of thinking, while the correct answer should be robust to these variations. Second, we propose to use a verifier to score the quality of each reasoning path and guide the voting mechanism. We argue that not all reasoning paths are equally good or reliable, and some may contain errors or inconsistencies that can be detected by the verifier.
Figure 1: Our proposed method, DiVeRSe (**Diverse Verifier on Reasoning Step**).
Third, we propose to assign a fine-grained label to each step of the reasoning path and use a step-aware verifier to attribute the correctness or wrongness of the final answer to each step. We conjecture that some steps may be correct but followed by wrong steps or vice versa, and identifying these cases can help diagnose and improve the reasoning process.
We name our method DiVeRSe (diverse verifier on reasoning step) and evaluate it on eight reasoning benchmarks that require different types of reasoning skills. We use three OpenAI PLMs (_davinci_, _text-davinci-002_, and _code-davinci-002_) and compare our results with recent state-of-the-art methods. We find that DiVeRSe can consistently and significantly improve the performance of PLMs on these tasks, and achieve new state-of-the-art results on six of them1: GSM8K (\(74.4\%\to 83.2\%\)), AsDiv (\(81.9\%\to 88.7\%\)), MultiArith (\(99.3\%\to 99.8\%\)), SVAMP (\(86.6\%\to 87.0\%\)), SingleEq (\(79.5\%\to 94.9\%\)), and CLUTRR (\(67.0\%\to 95.9\%\)).
Footnote 1: Most of the previous SOTA results were achieved by self-consistency on PaLM-540BChowdhery et al. (2022).
Our data is publicly available at [https://github.com/microsoft/DiVeRSe](https://github.com/microsoft/DiVeRSe).
## 2 Diverse Verifier on Reasoning Step
Figure 1 shows the overview of DiVeRSe. The key insights are three-fold: (1) leveraging _diverse prompts_ to induce more diverse reasoning paths from the language models (Section 2.1); (2) training a _voting verifier_ to better derive the final answers from multiple reasoning paths (Section 2.2); (3) leveraging _step correctness_ to further boost the voting verifier (Section 2.3).
### Diverse Prompts
To reason effectively, it is beneficial to explore diverse reasoning paths, following the idea that "_All Roads lead to Rome_". Wang et al. (2022c) proposed to generate various reasoning paths from language models by _sampling decoding_. However, their method relies on a fixed set of exemplars for all prompts, which may introduce bias and limit the diversity of the generated reasoning paths. To address this issue, we randomly select \(M_{1}\) different prompts for each question, and then sample \(M_{2}\) reasoning paths for each prompt using sampling decoding. This way, we obtain \(M=M_{1}\times M_{2}\) diverse reasoning paths for each question.2
Footnote 2: Our main experiments use \(M_{1}=5\) and \(M_{2}=20\).
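A minimal sketch of this sampling scheme is shown below, assuming a `build_prompt` helper and a `sample_from_lm` decoding call as interfaces (both names are hypothetical):

```python
import random

def generate_diverse_paths(question, exemplar_base, build_prompt, sample_from_lm,
                           k=5, m1=5, m2=20, temperature=0.5):
    """Collect M = M1 * M2 reasoning paths: M1 random prompts, M2 samples per prompt."""
    paths = []
    for _ in range(m1):
        exemplars = random.sample(exemplar_base, k)  # a fresh exemplar set per prompt
        prompt = build_prompt(exemplars, question)
        paths.extend(sample_from_lm(prompt, n=m2, temperature=temperature))
    return paths
```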
### Voting Verifier
Verifier.The verifier takes a question and a candidate reasoning path as input, and outputs the probability that the reasoning path leads to the correct answer. We use _deberta-v3-large_He et al. (2021) as the backbone model, with a small scalar head that outputs predictions on the \([\mathbf{CLS}]\) token.
Training the verifier.For each training question, we generate multiple candidate reasoning paths using chain-of-thought reasoning. We regard the reasoning paths that match the ground truth final answer as positive, and the others as negative.
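As a sketch, this data construction amounts to labeling each sampled path by final-answer match; the field names below are illustrative assumptions.

```python
def make_verifier_training_data(questions, gold_answers, sampled_paths):
    """Label a (question, path, answer) triple positive iff the answer matches gold."""
    examples = []
    for q, gold, paths in zip(questions, gold_answers, sampled_paths):
        for z, y in paths:  # z: reasoning path, y: its final answer
            examples.append({"question": q, "path": z, "label": int(y == gold)})
    return examples
```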
Voting Verifier.Wang et al. (2022) use _majority voting_ to aggregate the predictions of different reasoning paths. This method may fail when the majority of the reasoning paths are misled, while the minority of the reasoning paths are reasonable. We propose _voting verifier_, which leverages both _voting_ and _verifier_:
\[\hat{\mathbf{y}}=\operatorname*{arg\,max}_{\mathbf{y}}\sum_{i=1}^{M}\mathbbm{1 }_{\mathbf{y}_{i}=\mathbf{y}}\cdot f(\mathbf{x}_{i},\mathbf{z}_{i},\mathbf{ y}_{i}), \tag{1}\]
where \(\mathbbm{1}_{\mathbf{y}_{i}=\mathbf{y}}\) is an indicator function that returns 1 (or 0) if \(\mathbf{y}_{i}=\mathbf{y}\) (or not), and \(f(\cdot)\) is the probability produced by the verifier.
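Equation (1) amounts to weighted voting over final answers, as in the sketch below; `verifier_prob` denotes the probability \(f(\cdot)\) produced by the trained verifier and is an assumed interface.

```python
from collections import defaultdict

def voting_verifier(question, candidates, verifier_prob):
    """Pick the answer whose reasoning paths accumulate the highest verifier mass (Eq. 1)."""
    scores = defaultdict(float)
    for z, y in candidates:  # each path votes for its answer, weighted by f(x, z, y)
        scores[y] += verifier_prob(question, z, y)
    return max(scores, key=scores.get)
```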
### Step-aware Voting Verifier
Each reasoning path consists of several steps. We hypothesize that not all the steps in an incorrect reasoning path are equally wrong, and some steps may still be useful for reasoning.
Figure 2: Chain-of-thought reasoning for GSM8K math word problem. The prompt is colored black and the reasoning path produced by the language model is colored teal. This reasoning path contains two reasoning steps.
To exploit this, we extend the voting verifier to a step-aware voting verifier by introducing an extended loss function:
\[\begin{split}\mathcal{L}=\mathcal{L}_{0}+\alpha\cdot\mathcal{L}_{1}, \\ \mathcal{L}_{1}=\sum_{i=1}^{|\hat{D}|}\sum_{j=1}^{|S_{i}|}\!\! \text{BCE}(\text{label}_{i,j},f^{\prime}(\text{input}_{i},j)).\end{split} \tag{2}\]
\(\alpha\) is a hyperparameter to balance the original loss \(\mathcal{L}_{0}\) and the step-level auxiliary loss \(\mathcal{L}_{1}\); \(S_{i,1},S_{i,2},...,S_{i,|S_{i}|}\) are the steps in \(\mathbf{z}_{i}\); \(\text{label}_{i,j}\) indicates whether \(S_{i,j}\) is correct or not; \(f^{\prime}(\text{input}_{i},j)\) represents the probability of the positive label for \(S_{i,j}\).3
Footnote 3: Specifically, \(f^{\prime}(\text{input}_{i},j)\) is predicted from the hidden state of the last token of \(S_{i,j}\) in deberta-v3-large, similar to token classification tasks.
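A minimal PyTorch sketch of Eq. (2), assuming the path-level logit and the per-step logits have already been extracted from the encoder:

```python
import torch.nn.functional as F

def step_aware_loss(path_logit, path_label, step_logits, step_labels, alpha=0.1):
    """L = L0 + alpha * L1: path-level BCE plus a step-level BCE auxiliary term (Eq. 2)."""
    l0 = F.binary_cross_entropy_with_logits(path_logit, path_label)    # L0
    l1 = F.binary_cross_entropy_with_logits(step_logits, step_labels)  # L1
    return l0 + alpha * l1
```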
**To obtain the step-level labels** (i.e., \(\text{label}_{i,j}\)) for negative training data with wrong answers, we design an algorithm that compares intermediate results among steps in positive/negative reasoning paths. Figure 3 illustrates this algorithm. This algorithm can not only work on math word problems, but also generalize to other reasoning tasks: we use an off-the-shelf natural language inference model, _roberta-large-mnli_(Liu et al., 2019), to check whether two reasoning steps are semantically equivalent or not. Given a reasoning step, if we cannot find any semantically equivalent step in the positive reasoning paths, we label it and all the subsequent steps as negative steps.
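For math word problems, the labeling rule of Figure 3 can be sketched by matching intermediate results, as below; this simplified version pools the intermediate results of all positive paths, whereas the general version replaces exact matching with the NLI model's equivalence check.

```python
def label_steps(negative_path, positive_paths):
    """Label the steps of a negative path: once a step's intermediate result never
    occurs in any positive path, that step and all subsequent ones are negative.
    Paths are represented as lists of intermediate results, e.g. [7, 9, 18]."""
    positive_results = {r for path in positive_paths for r in path}
    labels, still_positive = [], True
    for result in negative_path:
        still_positive = still_positive and (result in positive_results)
        labels.append(still_positive)
    return labels

# Figure 3's example: for the negative path 7 -> 9 -> 8, the third step is the
# first one never seen in a positive path, so it and everything after it is negative.
print(label_steps([7, 9, 8], [[7, 9, 18], [7, 9, 18]]))  # [True, True, False]
```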
## 3 Experimental Setup
### Reasoning Tasks
Arithmetic Reasoning.Following Wang et al. (2022c), we use AsDiv Miao et al. (2020), SingleEq Koncel-Kedziorski et al. (2015), MultiArith Roy and Roth (2015), SVAMP Patel et al. (2021), and GSM8K Cobbe et al. (2021).
Commonsense Reasoning.Following Wang et al. (2022c), we use CommonsenseQA Talmor et al. (2019) and StrategyQA Geva et al. (2021).
Inductive Reasoning.We use CLUTRR Sinha et al. (2019), a diagnostic benchmark for inductive reasoning, requiring inferring kinship relations between characters in short stories.
### Details
Language Models.We use three OpenAI language models: _davinci_, _text-davinci-002_ and _code-davinci-002_. We use the default parameters except a temperature of \(0.5\) in sampling.
Exemplars.For arithmetic/commonsense/inductive reasoning, each prompt contains \(5/7/7\) exemplars. For DiVeRSe, each question has \(5\) different prompts, and \(20\) reasoning paths are sampled from the language model for each prompt. For arithmetic reasoning, the exemplars are randomly sampled from the training dataset of GSM8K; for CLUTRR, the exemplars are sampled from its training dataset, with reasoning paths synthesized by hand-crafted rules (detailed settings for CLUTRR are listed in Appendix D); for StrategyQA and CommonsenseQA, their original datasets do not contain enough exemplars with well-annotated reasoning paths, so we construct \(1,000\) pseudo exemplars by "self-teaching" (the approach and the noise issue are discussed in Appendix B) from "seed" exemplars provided by Wei et al. (2022).
Training Datasets.For each task, we sample \(1,000\)\(\langle\text{question},\text{answer}\rangle\) pairs from the training dataset to train the verifier.
Verifier.We fine-tune _deberta-v3-large_(He et al., 2021) with learning rate \(1\times 10^{-5}\) and batch size \(128\). For the step-aware verifier, we select the best \(\alpha\) among \(0.0/0.1/0.2/0.3\).
Figure 3: How step-level labels are extracted. This figure shows four reasoning paths for a math word problem: the first two are positive and the bottom two are negative. The path \(7\to 9\to 18\) means that the first step calculates 7, the second step calculates 9, and the third step calculates the final answer 18. For the last path, the third step (which calculates \(8\)) has never occurred in any positive reasoning paths, thus we regard this step and all steps after it as negative steps.
## 4 Main Results
Table 1 shows the overall experimental results. We mainly compare DiVeRSe with two baselines: (1) greedily decoding a single reasoning path (Wei et al., 2022), referred to as _Greedy Decode_; (2) sampling \(100\) reasoning paths and then selecting the final answer via majority voting (Wang et al., 2022c), referred to as _Self-Consistency_.
### Effectiveness
Experimental results clearly demonstrate that DiVeRSe can bring significant and consistent improvements over recent strong baselines. The improvements are across different models (_davinci_, _text-davinci-002_ and _code-davinci-002_) as well as different reasoning skills (eight tasks in three reasoning skills). Taking GSM8K as an example, compared to _Greedy Decoding_ and _Self-Consistency_, DiVeRSe brings improvements of \(22.2\%/12.0\%\) on _davinci_, \(33.1\%/12.0\%\) on _text-davinci-002_, and \(27.0\%/5.6\%\) on _code-davinci-002_. Compared to _Self-Consistency_, DiVeRSe achieves average improvements of \(5.6\%/5.1\%/54.3\%\) on the three reasoning skills, respectively.
### Comparing to Previous SOTAs
In Table 1, we also compare DiVeRSe with: (1) previous SOTA results based on fine-tuning; (2) recent SOTA results (Wei et al., 2022) based on PaLM (Chowdhery et al., 2022), a gigantic language model with 540 billion parameters.4
Footnote 4: DiVeRSe can also be applied to PaLM, but PaLM is not publicly available.
On all the five arithmetic reasoning tasks, DiVeRSe (with _code-davinci-002_) achieves new SOTA results, with an average improvement of \(6.2\%\). On the two commonsense reasoning tasks, the performance of DiVeRSe is slightly lower (\(-1.9\%\)) than that of PaLM-based self-consistency. We speculate that the reason might be that these two commonsense reasoning tasks are multiple-choice tasks rather than open-ended generation tasks, resulting in more false-positive exemplars in the pseudo exemplar base (details are discussed in Appendix B.2).
\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
Method & GSM8K & AsDiv & MultiArith & SVAMP & SingleEq & CommonsenseQA & StrategyQA & CLUTRR \\
\hline
Previous SOTA (fine-tuning) & 57\({}^{\text{a}}\) & 75.3\({}^{\text{b}}\) & 60.5\({}^{\text{c}}\) & 57.4\({}^{\text{d}}\) & 32.5\({}^{\text{e}}\) & 91.2\({}^{\text{f}}\) & 73.9\({}^{\text{g}}\) & 67.0\({}^{\text{h}}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Overall experimental results.
Regarding inductive reasoning, DiVeRSe achieves a surprisingly good performance of \(95.9\%\) on the CLUTRR task, outperforming (\(+28.9\%\)) the previous SOTA result with fine-tuning (Sinha et al., 2019).5
Footnote 5: Sinha et al. (2019) also introduced a method with \(100\%\) accuracy. We do not take it into the comparison, as this method requires a domain-specific system with complicated rules to extract a knowledge graph for each input text.
## 5 Case Study
Table 2 shows an example of step-level scores given by the step-aware verifier. Steps in the correct reasoning path have relatively high scores, while the scores in the wrong reasoning path show where the path starts to be wrong. This indicates that besides improving the performance, the step-aware verifier can also bring interpretability to show the step-level correctness. We also show some extra examples of majority-voting in Table 10.
## 6 Analysis
We also conduct ablation experiments and analysis to investigate the keys to the success of DiVeRSe.
### The Effectiveness of Diverse Prompts
By diversifying both prompts and reasoning paths (\(\langle M_{1}=5,M_{2}=20\rangle\)), we consistently improve performance over the sampling decoding approach (\(\langle M_{1}=1,M_{2}=100\rangle\)) of Wang et al. (2022c), as shown in Table 3. Both methods use majority voting. Table 4 further reveals that neither only using diverse prompts nor only using sampling is optimal. In other words, _the best performance is achieved by combining diverse prompts and sampling_. Moreover, Figure 4 demonstrates that _diverse prompts lead to more diverse reasoning paths_. We hypothesize that this diversity contributes to the performance improvement by: (1) making correct results more distinguishable from varied errors during inference; and (2) providing more diverse negative samples for enhancing the verifier's generalizability during training.
### The Effectiveness of Voting Verifier
We compare three algorithms to conclude the agreement from diverse reasoning paths: majority voting, verifier, and voting verifier. Table 5 shows the results. _Compared to majority voting, our voting verifier can significantly and consistently boost reasoning performance across different tasks and different language models_. Verifier without voting often outperforms majority voting, but extending it to voting verifier can further boost the performance.
\begin{table}
\begin{tabular}{l c} \hline \hline \(\langle M_{1},M_{2}\rangle\) & GSM8K \\ \hline \(M_{1}=1,M_{2}=100\) & 76.7 \\ \(M_{1}=5,M_{2}=20\) & **80.0** \\ \(M_{1}=10,M_{2}=10\) & 79.8 \\ \(M_{1}=100,M_{2}=1\) & 73.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: GSM8K majority voting results for different \(\langle M_{1},M_{2}\rangle\) settings on _code-davinci-002_.
Figure 4: Diverse prompts increase the diversity of GSM8K reasoning paths and their final answers. This is beneficial for the voting verifier. Left: the average number of distinct reasoning paths per question (we consider two reasoning paths to be the same if they have the same intermediate result chain as shown in Figure 3). Right: the average number of distinct final answers per question.
\begin{table}
\begin{tabular}{l c c c}
\hline \hline
Method & GSM8K & CQA & CLUTRR \\
\hline
\multicolumn{4}{l}{davinci:} \\
\(M_{1}=1,M_{2}=100\) & 18.9 & 57.4 & 42.5 \\
\(M_{1}=5,M_{2}=20\) & **21.3** & **57.5** & **45.9** \\
\hline
\multicolumn{4}{l}{text-davinci-002:} \\
\(M_{1}=1,M_{2}=100\) & 58.2 & 72.9 & 34.9 \\
\(M_{1}=5,M_{2}=20\) & **61.3** & **77.3** & **35.6** \\
\hline
\multicolumn{4}{l}{code-davinci-002:} \\
\(M_{1}=1,M_{2}=100\) & 76.7 & 77.3 & 35.6 \\
\(M_{1}=5,M_{2}=20\) & **80.0** & **78.8** & **43.8** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The effectiveness of diverse prompts (\(\langle 5,20\rangle\)) compared to pure sampling decoding (Wang et al., 2022c), under majority voting.
### The Effectiveness of Step-aware Verifier
We evaluate the impact of incorporating step-level information into the voting verifier of DiVeRSe. Table 6 shows the performance of DiVeRSe with and without the step-aware mechanism on both the GSM8K and the CommonsenseQA datasets. We find that _using the step-aware verifier improves the performance in most of the experiments_. The only exception is _code-davinci-002_ on GSM8K, where the step-aware verifier slightly lowers the performance. We hypothesize that _code-davinci-002_ is more capable of generating high-quality reasoning paths, and thus does not benefit much from the step-level information.
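As a rough illustration (not the paper's exact formulation), one way a step-aware verifier can fold step-level scores into a path score is to blend the path-level score with the worst step score, so that a single bad step penalizes the whole path. The blending weight `alpha` and the use of `min()` are purely assumptions:

```python
def step_aware_score(path_score, step_scores, alpha=0.5):
    """Blend a path-level verifier score with step-level scores.

    Using min() and a fixed alpha are illustrative assumptions,
    not the paper's exact formulation."""
    if not step_scores:
        return path_score
    # A single low-scoring step drags the whole path down.
    return alpha * path_score + (1 - alpha) * min(step_scores)
```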
**Detailed Human Evaluation of Reasoning Steps.** We further examine the quality of the reasoning steps generated by DiVeRSe (with/without the step-aware mechanism) on GSM8K, asking human annotators to judge whether each reasoning step is good: not only correct formulas and calculation results, but also textual fluency and logical coherence. For each test question, we compare three reasoning paths produced by _code-davinci-002_: the one with the highest verifier score, the one with the highest step-aware verifier score, and a randomly chosen one. The annotators (master students) label any incorrect or unsatisfactory reasoning steps in each path (single-blind) and explain why. We collect annotations for 200 test questions, half of which have correct final answers from all three paths, and half of which have incorrect final answers from all three paths.
We find that **all the reasoning paths with correct final answers are also correct in every intermediate step**, which shows that _code-davinci-002_ can reliably generate accurate reasoning steps, not just lucky guesses. However, we also find that **many of the correct reasoning paths have unnecessary steps**. Figure 5(a) shows that \(40\%\) of the random paths have redundant steps, and the verifier can lower this percentage to \(31\%\). We also find that **the step-aware verifier can further eliminate redundant reasoning steps** from \(31\%\) to \(20\%\).
Furthermore, for the incorrect reasoning paths, we find that **the step-aware mechanism helps produce more correct steps before making mistakes**. For each failed test question, we compare the number of correct steps in the path with the highest verifier score and in the path with the highest step-aware verifier score (by human evaluation).
Figure 5: Human evaluation on GSM8K shows the effectiveness of the step-aware mechanism for the verifier.
Figure 6: The distribution of error types in incorrect reasoning steps.
| Method | GSM8K | CQA | CLUTRR |
| --- | --- | --- | --- |
| _davinci_: | | | |
| Voting | 21.3 | 57.4 | 45.9 |
| Verifier | 27.0 | 74.1 | **93.2** |
| Voting Verifier | **30.6** | **75.0** | 92.5 |
| _text-davinci-002_: | | | |
| Voting | 61.3 | 77.3 | 35.6 |
| Verifier | 62.7 | 77.9 | **93.8** |
| Voting Verifier | **68.9** | **79.2** | **93.8** |
| _code-davinci-002_: | | | |
| Voting | 80.0 | 75.4 | 43.8 |
| Verifier | 65.9 | **78.8** | **95.9** |
| Voting Verifier | **82.3** | **78.8** | **95.9** |

Table 5: The effectiveness of the voting verifier. All experiments in this table use \(\langle M_{1},M_{2}\rangle=\langle 5,20\rangle\).
Figure 5(b) shows that for \(33\%\)/\(17\%\) of the failed test cases, the step-aware verifier generates more/fewer correct steps than the verifier without the step-aware mechanism.
**Step Error Types.** Figure 6 shows the distribution of error types among the incorrect reasoning steps. We see that \(95\%\) of the errors are caused by incorrect formulations (i.e., using wrong intermediate results or operators and generating invalid formulas, which lead to incorrect answers). We also see that, although _code-davinci-002_ often makes division calculation errors (e.g., \(10/3=3\)), both the verifier and the step-aware verifier can effectively assign low scores to such paths, thus improving the performance.
### How Many Diverse Outputs Do We Need?
Figure 7 shows the accuracy at different \(M\) values, where \(M\) is the number of reasoning paths sampled from the \(100\) generated paths for each question. We observe that: (1) the accuracy increases with more reasoning paths, but the improvement becomes marginal at \(M\geq 50\); (2) DiVeRSe outperforms self-consistency significantly and consistently at different \(M\) values.
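A curve like the one in Figure 7 can be reproduced by subsampling \(M\) of the \(100\) stored paths per question and rerunning the aggregation. Below is a hedged sketch; the record layout (field names, score pairs) is an assumption:

```python
import random

def accuracy_at_m(records, m, aggregate, trials=10):
    """Mean accuracy when only m of the stored paths are used per question.

    records: dicts with "paths" (a list of (answer, score) pairs) and "gold";
    aggregate: an aggregation function such as voting_verifier above."""
    accuracies = []
    for _ in range(trials):
        correct = 0
        for record in records:
            subset = random.sample(record["paths"], m)
            answers = [answer for answer, _ in subset]
            scores = [score for _, score in subset]
            correct += int(aggregate(answers, scores) == record["gold"])
        accuracies.append(correct / len(records))
    return sum(accuracies) / len(accuracies)
```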
### How Much Training Data Do We Need?
DiVeRSe requires a dataset with labeled reasoning paths for training the verifier. Figure 8 shows how the size of this dataset affects performance. We observe that performance drops by only about \(2\%\) even when the training data is cut by \(75\%\) (from \(1{,}000\) to \(250\) questions). With the same reasoning paths, the voting verifier performs better than majority voting, while the verifier without voting causes significant performance drops.
### The Impact of the Number of Exemplars
We conduct experiments with \(k=3/5/8\) (\(k\) is the number of exemplars used in each prompt) on GSM8K. Figure 9 shows the results. We observe that _using 8 exemplars in each prompt can further boost the accuracy on GSM8K to \(83.2\%\)_.
## 7 Related Work
**Reasoning Skills.** Researchers in the literature have proposed many benchmarks requiring various reasoning skills, including commonsense reasoning (Zellers et al., 2018; Talmor et al., 2019; Bhagavatula et al., 2019; Geva et al., 2021), numerical reasoning (Dua et al., 2019), multi-hop reasoning (Yang et al., 2018), arithmetic reasoning (Koncel-Kedziorski et al., 2015; Roy and Roth, 2015; Miao et al., 2020; Patel et al., 2021; Cobbe et al., 2021), logical reasoning (Liu et al., 2020; Yu et al., 2020), inductive reasoning (Sinha et al., 2019), and tabular reasoning (Chen et al., 2020; Zhu et al., 2021).
**Reasoning with Symbolic Systems.** Much research in the literature enhances the reasoning capabilities of machine learning systems by exploiting symbolic systems, including knowledge graphs (Mihaylov and Frank, 2018; Bauer et al., 2018; Kundu et al., 2019; Wang et al., 2019; Lin et al., 2019; Ding et al., 2019; Feng et al., 2020; Wang et al., 2022b) or question taxonomies (Dua et al., 2019; Andor et al., 2019; Hu et al., 2019; Wang et al., 2022a). Although these methods work well on specific benchmarks, they usually require domain-specific designs and human effort, thus limiting their generalizability.
**Reasoning via Language Models.** This line of work aims to address reasoning tasks in a general sequence-to-sequence manner, empowered by reasoning-aware pre-training or fine-tuning of language models.
| Method | GSM8K | CommonsenseQA |
| --- | --- | --- |
| _davinci_: | | |
| DiVeRSe (without step) | 30.6 | 75.0 |
| DiVeRSe (with step) | **30.9** | **76.0** |
| _text-davinci-002_: | | |
| DiVeRSe (without step) | 68.9 | 79.2 |
| DiVeRSe (with step) | **70.2** | **79.8** |
| _code-davinci-002_: | | |
| DiVeRSe (without step) | **82.3** | 78.8 |
| DiVeRSe (with step) | 81.5 | **79.9** |

Table 6: The effectiveness of the step-aware voting verifier, with \(\langle M_{1},M_{2}\rangle=\langle 5,20\rangle\).
Figure 7: GSM8K accuracy at different \(M\) values (how many reasoning paths are used for each question).
For example, Deng et al. (2021) proposed to train the language model with crawled data from the internet; Asai and Hajishirzi (2020) proposed a logic-guided data augmentation method to pre-train the language model; Shen et al. (2021) and Cobbe et al. (2021) proposed to train a verifier to rank solutions sampled from fine-tuned language models; Geva et al. (2020), Yoran et al. (2022), Campagna et al. (2020), and Wang et al. (2022) proposed to equip language models with reasoning abilities by generating training examples with human-designed templates; Pi et al. (2022) proposed to inject reasoning capabilities into language models by continual pre-training on program execution data.
**Reasoning via Prompting Gigantic Language Models.** Gigantic language models like GPT-3 (Brown et al., 2020) have demonstrated impressive few-shot learning capabilities in many tasks and have attracted much research interest in making gigantic language models better few-shot learners (Zhao et al., 2021; Holtzman et al., 2021; Min et al., 2021; Liu et al., 2022; Lu et al., 2021; Rubin et al., 2021; Min et al., 2022). However, these methods struggle to address tasks requiring reasoning skills. To mitigate this, a recent line of research focuses on unleashing the reasoning capabilities of gigantic language models via better prompting strategies. Wei et al. (2022) proposed _chain-of-thought reasoning_, whose key insight is the insertion of multi-step reasoning paths before generating the final answers; Wang et al. (2022) proposed to improve chain-of-thought reasoning via _self-consistency_, whose key insight is to conclude the most consistent answer from different reasoning paths sampled from the language model; Zhou et al. (2022) and Creswell et al. (2022) proposed to leverage gigantic language models to decompose questions into sub-questions, thereby addressing them in an iterative manner; Kojima et al. (2022) showed that gigantic language models can even be good zero-shot reasoners, given prompts that induce step-by-step reasoning; Lampinen et al. (2022) proposed building a prompt by selecting examples and explanations together, substantially improving performance over selecting examples alone. Despite their great successes, these works come with their limitations. This paper continues this line of research, focusing on a diverse verifier on reasoning steps.
## 8 Conclusion and Future Work
In this paper, we present DiVeRSe, a novel and general method to enhance the reasoning abilities of large language models. Our method builds on the idea of prompting language models with multi-step reasoning paths, but introduces three key components: diverse prompts, a voting verifier, and a step-aware verifier. The last is especially novel and effective, as it verifies each reasoning step separately and provides a detailed analysis of the model's behavior at each step. We demonstrate the superiority of DiVeRSe through extensive experiments; for instance, using _code-davinci-002_, our method achieves state-of-the-art performance on most reasoning tasks, surpassing the 540B PaLM model with previous prompting techniques.
There are many directions for our future work. (1) As discussed in Appendix B.2, we will continue to investigate how to reduce or recognize false-positive pseudo exemplars.
Figure 8: DiVeRSe performance (_code-davinci-002_) on GSM8K with different sizes of the training dataset (without labeled reasoning paths).
Figure 9: DiVeRSe performance (_code-davinci-002_) on GSM8K when each prompt contains \(3/5/8\) exemplars.
(2) We plan to investigate mechanisms that produce better diverse prompts than simple sampling. (3) We will extend DiVeRSe to other tasks and continue to design better prompting techniques to elicit the power of gigantic language models.
## 9 Limitations
**Computing Resources.** Despite the strong performance it achieves, our framework needs to be applied to large language models like GPT-3 or PaLM. Inference with these models costs more time and money than fine-tuning smaller models like RoBERTa (Liu et al., 2019).
**Faithfulness.** Although DiVeRSe can significantly improve the accuracy of final answers, we still cannot guarantee that the reasoning paths produced by the language models are fully faithful. This remains the key challenge and future direction for this line of research (chain-of-thought reasoning).
**More Training Data.** DiVeRSe needs labeled data with well-annotated reasoning paths to construct diverse prompts, and it also needs a training dataset to supervise the verifier. From another point of view, however, this limitation can also be regarded as a contribution: it studies how chain-of-thought reasoning can be further improved when more training data is available than just a few exemplars.
**Human Evaluation of Reasoning Steps.** We use human evaluation to measure the quality of the intermediate steps in reasoning paths, since few current works provide reliable frameworks for evaluating the quality of reasoning steps.
## References
Sample papers in the dataset (from the dataset preview; truncated summary, author, reference, and content fields are omitted):

| arXiv ID | Title | Primary category | Published |
| --- | --- | --- | --- |
| 2206.04615 | Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models | cs.CL | 2022-06-09 |
| 2206.05229 | Measuring the Carbon Intensity of AI in Cloud Instances | cs.LG | 2022-06-10 |
| 2206.05802 | Self-critiquing models for assisting human evaluators | cs.CL | 2022-06-12 |
| 2206.06336 | Language Models are General-Purpose Interfaces | cs.CL | 2022-06-13 |
| 2206.07635 | AI Ethics Issues in Real World: Evidence from AI Incident Database | cs.AI | 2022-06-15 |
| 2206.14858 | Solving Quantitative Reasoning Problems with Language Models | cs.CL | 2022-06-29 |
| 2207.00560 | Is neural language acquisition similar to natural? A chronological probing study | cs.CL | 2022-07-01 |
| 2207.04672 | No Language Left Behind: Scaling Human-Centered Machine Translation | cs.CL | 2022-07-11 |
| 2207.05221 | Language Models (Mostly) Know What They Know | cs.CL | 2022-07-11 |
Dataset Description
The "arxiv_small_nougat" dataset is a collection of 108 recent papers sourced from arXiv, focusing on topics related to Large Language Models (LLM) and Transformers. These papers have been meticulously processed and parsed using Meta's Nougat model, which is specifically designed to retain the integrity of complex elements such as tables and mathematical equations.
Data Format
The dataset contains the parsed content of the selected papers, with special attention given to the preservation of formatting, tables, and mathematical expressions. Each paper is provided as plain text.
Usage
Researchers, academics, and natural language processing practitioners can leverage this dataset for various tasks related to LLMs and Transformers, including the following (a minimal loading example follows the list):
- Language modeling
- Text summarization
- Information retrieval
- Table and equation extraction
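For example, here is a short loading sketch using the Hugging Face `datasets` library; the hub namespace below is a placeholder, not the dataset's actual path:

```python
from datasets import load_dataset

# "user/arxiv_small_nougat" is a placeholder namespace; substitute the
# dataset's actual Hugging Face Hub path.
ds = load_dataset("user/arxiv_small_nougat", split="train")

paper = ds[0]
print(paper["title"])
print(paper["content"][:500])  # Nougat-parsed text; tables and LaTeX math preserved
```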
Acknowledgments
We acknowledge the arXiv platform for providing open access to a wealth of research papers in the field of machine learning and natural language processing.
License
MIT