Investigating the Robustness of Deductive Reasoning with Large Language Models

Open Access
Authors
Publication date 2025
Host editors
  • Inês Lynce
  • Nello Murano
  • Mauro Vallati
  • Serena Villata
  • Federico Chesani
  • Michela Milano
  • Andrea Omicini
  • Mehdi Dastani
Book title ECAI 2025
Book subtitle 28th European Conference on Artificial Intelligence, 25-30 October 2025, Bologna, Italy, including 14th Conference on Prestigious Applications of Intelligent Systems (PAIS 2025): proceedings
ISBN (electronic)
  • 9781643686318
Series Frontiers in Artificial Intelligence and Applications
Event 28th European Conference on Artificial Intelligence, ECAI 2025, including 14th Conference on Prestigious Applications of Intelligent Systems, PAIS 2025
Pages (from-to) 1776-1783
Number of pages 8
Publisher Amsterdam: IOS Press
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract

Large Language Models (LLMs) have been shown to achieve impressive results on many reasoning-based Natural Language Processing (NLP) tasks, suggesting a degree of deductive reasoning capability. However, it remains unclear to what extent LLMs, under both informal and autoformalisation approaches, are robust on logical deduction tasks. Moreover, while many LLM-based deduction methods have been proposed, a systematic study analysing the impact of their design components is lacking. Addressing these two challenges, we propose the first study of the robustness of formal and informal LLM-based deductive reasoning methods. We devise a framework with two families of perturbations, adversarial noise and counterfactual statements, which jointly generate seven perturbed datasets. We organise the landscape of LLM reasoners according to their reasoning format, formalisation syntax, and feedback for error recovery. The results show that adversarial noise affects autoformalisation, while counterfactual statements influence all approaches. Detailed feedback does not improve overall accuracy despite reducing syntax errors, highlighting the difficulty LLM-based methods have in self-correcting effectively.
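The two perturbation families described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual framework: the function names, the character-duplication noise, and the substitution-based counterfactuals are all illustrative assumptions about what such perturbations might look like.

```python
import random

def add_adversarial_noise(statement: str, rate: float = 0.1, seed: int = 0) -> str:
    """Character-level adversarial noise: randomly duplicate letters.

    Illustrative only; the paper's actual noise perturbations may differ.
    """
    rng = random.Random(seed)
    out = []
    for ch in statement:
        out.append(ch)
        # With probability `rate`, duplicate alphabetic characters
        # to simulate typo-like surface noise.
        if ch.isalpha() and rng.random() < rate:
            out.append(ch)
    return "".join(out)

def make_counterfactual(statement: str, swaps: dict) -> str:
    """Counterfactual perturbation: swap predicates or entities so the
    premise contradicts common world knowledge, while staying logically
    well-formed. Illustrative substitution-based approach."""
    for old, new in swaps.items():
        statement = statement.replace(old, new)
    return statement

premise = "All birds can fly. A penguin is a bird."
noisy = add_adversarial_noise(premise, rate=0.2)
counterfactual = make_counterfactual(premise, {"fly": "swim"})
```

A formal (autoformalisation) pipeline would then translate the perturbed text into logic before solving, whereas an informal reasoner would prompt the LLM on the perturbed text directly; the abstract's finding is that surface noise mainly hurts the former, while counterfactual content hurts both.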

Document type Conference contribution
Language English
Published at https://doi.org/10.3233/FAIA251007
Other links https://www.scopus.com/pages/publications/105024459825
Downloads
FAIA-413-FAIA251007 (Final published version)