VeriCoT: Neuro-symbolic Chain-of-Thought Validation via Logical Consistency Checks
Abstract
VeriCoT, a neuro-symbolic method, formalizes and verifies logical arguments in Chain-of-Thought reasoning to improve the reliability and accuracy of LLMs.
LLMs can perform multi-step reasoning through Chain-of-Thought (CoT), but they cannot reliably verify their own logic. Even when they reach correct answers, the underlying reasoning may be flawed, undermining trust in high-stakes scenarios. To mitigate this issue, we introduce VeriCoT, a neuro-symbolic method that extracts and verifies formal logical arguments from CoT reasoning. VeriCoT formalizes each CoT reasoning step into first-order logic and identifies premises that ground the argument in source context, commonsense knowledge, or prior reasoning steps. The symbolic representation enables automated solvers to verify logical validity, while the natural-language premises allow humans and systems to identify ungrounded or fallacious reasoning steps. Experiments on the ProofWriter, LegalBench, and BioASQ datasets show VeriCoT effectively identifies flawed reasoning and serves as a strong predictor of final answer correctness. We also leverage VeriCoT's verification signal for (1) inference-time self-reflection, (2) supervised fine-tuning (SFT) on VeriCoT-distilled datasets, and (3) preference fine-tuning (PFT) with direct preference optimization (DPO) using verification-based pairwise rewards, further improving reasoning validity and accuracy.
Community
LLM CoT reasoning looks smart but can be logically flawed or... just made up. It's time to hold reasoning accountable!
We built VeriCoT to do just that. VeriCoT extracts the core argument of the CoT using well-formed symbolic notions of logical support. It formalizes every CoT step into first-order logic and identifies the exact premises it rests on. This gives us two superpowers:
🤖Automated Proof: Solvers can automatically verify if the logic is valid.
🧑🔬Human-Readable Audits: Natural language premises let you pinpoint ungrounded leaps or fallacies.
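The automated-proof check can be sketched in a few lines. This is a minimal propositional stand-in, not VeriCoT's actual pipeline: the paper formalizes steps in first-order logic and uses an automated solver, whereas here we enumerate truth assignments, and the toy premises are hypothetical:

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Check whether the premises logically entail the conclusion by
    enumerating every truth assignment (a propositional stand-in for
    the FOL solver check VeriCoT performs on each CoT step)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # countermodel found: the step is not supported
    return True

# Toy CoT step: "Anne is kind" + "kind people are nice" => "Anne is nice"
kind = lambda env: env["kind_anne"]
kind_implies_nice = lambda env: (not env["kind_anne"]) or env["nice_anne"]
nice = lambda env: env["nice_anne"]

print(entails([kind, kind_implies_nice], nice, ["kind_anne", "nice_anne"]))  # True
print(entails([kind], nice, ["kind_anne", "nice_anne"]))  # False: ungrounded leap
```

Dropping a premise makes the check fail, which is exactly the "ungrounded leap" signal a human auditor would want surfaced.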
Best of all, these verification signals can be used to train more verifiable models!
To our knowledge, VeriCoT is the first neuro-symbolic validator of CoT traces in non-math/code domains.
📄 Paper: https://arxiv.org/pdf/2511.04662
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- LOGicalThought: Logic-Based Ontological Grounding of LLMs for High-Assurance Reasoning (2025)
- Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning (2025)
- Adaptive Selection of Symbolic Languages for Improving LLM Logical Reasoning (2025)
- Formal Reasoning for Intelligent QA Systems: A Case Study in the Educational Domain (2025)
- Correct Reasoning Paths Visit Shared Decision Pivots (2025)
- Local Coherence or Global Validity? Investigating RLVR Traces in Math Domains (2025)
- ReTraceQA: Evaluating Reasoning Traces of Small Language Models in Commonsense Question Answering (2025)