Ogunboyo, Awolesi Abolanle (2025) Neuro-Symbolic Generative AI for Explainable Reasoning. International Journal of Science and Research Archive, 16 (1). pp. 121-125. ISSN 2582-8185
Abstract
The integration of neural and symbolic systems, termed neuro-symbolic AI, presents a compelling path toward explainable reasoning in Artificial Intelligence (AI). While deep learning models excel at pattern recognition and generative capabilities, their opaque decision-making processes have raised concerns about transparency, interpretability, and trustworthiness. This research investigates the convergence of generative AI and neuro-symbolic architectures to enhance explainable reasoning. Employing a mixed-methods approach grounded in empirical evaluation, knowledge representation, and symbolic rule induction, the study presents a hybrid framework in which large language models (LLMs) are augmented with symbolic reasoning layers, allowing natural language generation with traceable logic paths. Experimental results on benchmark datasets such as CLEVR, e-SNLI, and RuleTakers demonstrate substantial improvements in logical coherence, reasoning accuracy, and explanation fidelity over purely neural baselines. The study further explores implications for regulated domains, including healthcare, law, and cybersecurity. This work provides a foundation for future AI systems that are powerful in generation and transparent in justification, offering an interpretable-by-design approach to responsible AI.
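As a rough illustration of the "traceable logic paths" idea described in the abstract, the sketch below is a minimal toy example, not the paper's actual framework: all predicate names, rules, and facts are invented, and the hand-written facts merely stand in for output a neural extractor might produce. It forward-chains symbolic rules over a set of facts and records each derivation step, so the final answer carries an explicit reasoning trace.

```python
# Minimal sketch of a symbolic reasoning layer with a recorded proof trace.
# Hypothetical example only: fact and rule names are invented for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    premises: tuple   # facts that must all hold
    conclusion: str   # fact derived when they do


def forward_chain(facts, rules):
    """Apply rules until no new facts are derived; return facts plus the trace."""
    known = set(facts)
    trace = []  # list of (rule, derived_fact) pairs, i.e. the logic path
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.conclusion not in known and all(p in known for p in rule.premises):
                known.add(rule.conclusion)
                trace.append((rule, rule.conclusion))
                changed = True
    return known, trace


if __name__ == "__main__":
    # Facts a neural component might extract for a CLEVR-style scene (hypothetical).
    facts = {"cube(a)", "metal(a)", "left_of(a, b)"}
    rules = [
        Rule(("cube(a)", "metal(a)"), "shiny(a)"),
        Rule(("shiny(a)", "left_of(a, b)"), "answer(shiny_object_left_of_b)"),
    ]
    _, trace = forward_chain(facts, rules)
    for rule, derived in trace:
        print(f"{' & '.join(rule.premises)} => {derived}")
```

Because every derived fact is tied to the rule and premises that produced it, the trace can be rendered back into natural language as a step-by-step justification, which is the kind of explanation fidelity the abstract contrasts with purely neural baselines.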
| Item Type: | Article |
| --- | --- |
| Official URL: | https://doi.org/10.30574/ijsra.2025.16.1.2019 |
| Uncontrolled Keywords: | Neuro-Symbolic AI; Generative AI; Explainable Reasoning; Symbolic Logic; Large Language Models; Trustworthy AI |
| Date Deposited: | 01 Sep 2025 12:04 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/4266 |