ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, vol.1, no.1, pp.1-10, 2024 (SCI-Expanded)
With the increasing adoption of artificial intelligence (AI) technologies, widely used but complex and opaque machine learning (ML) models, especially neural network models, are becoming increasingly difficult to understand. In a field such as cyber security, this situation both aggravates the problem and makes it more pressing. Trusting a system that cannot explain the reasons for its critical decisions, and leaving it to operate unsupervised, raises many concerns and can sometimes pose obvious dangers. In particular, the growing complexity of AI models results in black box models that cannot be easily examined, verified, or tested.
To overcome this problem, Explainable AI (XAI) proposes approaches that make AI models, and the outputs they produce, more interpretable, explainable, and understandable. As a holistic approach to this solution, XAI uses various methods to understand, comprehend, and interpret AI models and even to show which data, or which regions of the data, a decision was based on. XAI provides frameworks that help understand and explain the predictions of AI algorithms, bridging the gap between human intelligence and AI. Although the concept of XAI and the models developed for it have recently attracted great attention and are used intensively in some areas, they are not yet used sufficiently in Intrusion Detection Systems (IDSs).
For IDSs, which are a key solution for detecting attacks, it is important both to achieve high performance and to explain their decisions or reveal the justifications behind them. In addition, in order to develop representative examples and carry out studies that can guide future research, there is a need to apply XAI methods to IDSs, explain the decisions taken and the outputs produced, analyze the results obtained, and convert them into explainable forms or formats.
In this study, XAI has been examined in general and evaluated from different perspectives, and XAI methods and their applications in IDSs have been examined in detail. A detailed account of the definitions and terminology in the evolving field of XAI has been compiled; opportunities, challenges, and areas where further research is needed have been examined; and the approaches, latest developments, tools, and technologies for developing XAI applications, together with the steps required to implement them in AI-based IDSs and the risks encountered, are summarized.
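
As a concrete illustration of the kind of application examined in the study, the following minimal sketch shows how a post-hoc XAI method (here SHAP) might be attached to a simple ML-based intrusion detector so that each alert can be traced back to the input features that produced it. The synthetic data, the feature semantics, and the choice of the shap and scikit-learn libraries are assumptions made only for this example; they are not taken from the study itself.

    # Illustrative sketch only (assumed setup, not an experiment from the study):
    # applying a post-hoc XAI method (SHAP) to a simple ML-based intrusion
    # detector so that each "attack" decision can be attributed to input features.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for IDS flow features (e.g., duration, bytes, packets).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)  # 1 = "attack", 0 = "benign"

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    print("Test accuracy:", model.score(X_test, y_test))

    # TreeExplainer attributes each prediction to individual input features,
    # i.e. it shows which flow attributes pushed the model toward "attack".
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)
    print("Feature contributions for the first test flow:", shap_values[0])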