Neuro-Symbolic AI for Transparent and Explainable Network Security Models


Arooj Basharat
Hadia Azmat

Abstract

As artificial intelligence (AI) continues to transform the cybersecurity landscape, the need for transparent and explainable security models has never been greater. Traditional machine-learning-based security models often operate as “black boxes,” delivering high performance but offering little insight into how decisions are made. This opacity poses significant challenges, particularly when critical network-security decisions are at stake. Neuro-symbolic AI, a hybrid approach that combines the strengths of neural networks and symbolic reasoning, offers a promising solution. By coupling the pattern-recognition power of deep learning with the structured, interpretable nature of symbolic logic, neuro-symbolic AI can provide both high-performance detection and transparency. This paper explores the potential of neuro-symbolic AI to create explainable and transparent network security models. We delve into the core components of neuro-symbolic AI, the benefits it offers over traditional methods, and its application in enhancing security systems. Additionally, we examine the challenges of implementing these models in real-world cybersecurity environments, offering insights into future research directions and the evolving role of explainable AI in the fight against cyber threats.
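The pairing described above can be illustrated with a minimal sketch: a neural component produces an opaque anomaly score, while a symbolic layer of explicit rules supplies the human-readable justification for each alert. All feature names, weights, and rules below are hypothetical and for illustration only; they are not drawn from the paper.

```python
import math

def neural_score(features):
    # Stand-in for a trained neural network: a fixed logistic model over
    # numeric connection features (weights are invented for illustration).
    weights = {"failed_logins": 0.9, "bytes_out_mb": 0.02, "port_entropy": 1.1}
    z = sum(weights[k] * features[k] for k in weights) - 4.0
    return 1.0 / (1.0 + math.exp(-z))  # anomaly probability in [0, 1]

# Symbolic layer: explicit, auditable rules that fire alongside the score.
RULES = [
    ("R1: repeated failed logins suggest brute force",
     lambda f: f["failed_logins"] >= 5),
    ("R2: large outbound transfer suggests exfiltration",
     lambda f: f["bytes_out_mb"] > 100),
]

def classify(features, threshold=0.5):
    score = neural_score(features)
    fired = [name for name, cond in RULES if cond(features)]
    # Alert only when the neural score is suspicious AND at least one
    # symbolic rule corroborates it; the fired rules are the explanation.
    verdict = "alert" if score >= threshold and fired else "allow"
    return {"score": round(score, 3), "verdict": verdict, "explanation": fired}

print(classify({"failed_logins": 8, "bytes_out_mb": 250.0, "port_entropy": 2.5}))
```

In this toy design the symbolic rules gate the neural verdict, so every alert carries a structured explanation an analyst can audit, which is the transparency property the hybrid approach targets.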


How to Cite
Basharat, A., & Azmat, H. (2024). Neuro-Symbolic AI for Transparent and Explainable Network Security Models. Pioneer Research Journal of Computing Science, 1(3), 23–33. Retrieved from http://prjcs.com/index.php/prjcs/article/view/39