With the rise of complex cyber devices, Cyber Forensics (CF) faces many new challenges. For example, smartphones run dozens of operating systems, each offering millions of downloadable applications. Sifting through this large amount of data and making sense of it requires new techniques, such as those from the field of Artificial Intelligence (AI). To apply these techniques successfully in CF, we need to justify and explain the results to CF stakeholders, such as forensic analysts and members of the court, so that they can make informed decisions. Applying AI successfully in CF therefore requires developing trust in AI systems. Other factors in accepting the use of AI in CF include making AI authentic, interpretable, understandable, and interactive. In this way, AI systems will be more acceptable to the public and better aligned with legal standards. An explainable AI (XAI) system can play this role in CF, and we call such a system XAI-CF. XAI-CF is indispensable yet still in its infancy. In this paper, we explore and make a case for the significance and advantages of XAI-CF. We strongly emphasize the need to build a successful and practical XAI-CF system and discuss some of the main requirements and prerequisites of such a system. We present formal definitions of the terms CF and XAI-CF and a comprehensive literature review of previous works that apply and utilize XAI to build and increase trust in CF. We discuss some of the challenges facing XAI-CF and provide concrete solutions to them. We identify key insights and future research directions for building XAI applications for CF. This paper is an effort to explore and familiarize readers with the role of XAI applications in CF, and we believe that our work provides a promising basis for future researchers interested in XAI-CF.