With the rise of fifth-generation (5G) networks in critical applications, it is urgent to move from detection of malicious activity to systems capable of providing a reliable verdict suitable for mitigation. In this regard, understanding and interpreting machine learning (ML) models' security alerts is crucial for enabling actionable incident response orchestration. Explainable Artificial Intelligence (XAI) techniques are expected to enhance trust by providing insights into why alerts are raised. Under the umbrella of XAI, interpretability of outcomes depends crucially on understanding the influence of specific inputs, referred to as feature attribution. A dominant approach to feature attribution identifies feature sets that are statistically correlated with a given alert. This paper investigates its merits, against the backdrop of criticism in recent literature, in comparison with logic-based feature attribution. We extensively study two methods, SHAP and VoTE-XAI, as representatives of each feature attribution approach, by analyzing their interpretations of alerts generated by an XGBoost model across three 5G-relevant datasets (5G-NIDD, MSA, and PFCP) covering multiple attack scenarios. We identify three metrics for assessing explanations: sparsity, how concise they are; stability, how consistent they are across samples from the same attack type; and efficiency, how fast an explanation is generated. Our results reveal that logic-based attributions are consistently sparser and more stable across alerts. More importantly, we find a significant divergence between the features selected by SHAP and VoTE-XAI; however, none of the top-ranked features selected by SHAP were missed by VoTE-XAI. Finally, we analyze the efficiency of both methods, discussing their suitability for real-time security monitoring even in high-dimensional 5G environments (478 features).
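To make the kind of measurement described above concrete, the sketch below computes SHAP attributions for an XGBoost classifier and scores two simple proxies for sparsity and per-class stability. This is a minimal illustration, not the paper's evaluation code: the synthetic data, the zero-attribution threshold, and the top-k agreement proxy are placeholder assumptions.

```python
# Minimal sketch (assumed setup, not the authors' pipeline): SHAP attributions
# for an XGBoost classifier, with simple sparsity and stability proxies.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Placeholder data standing in for a 5G traffic dataset.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
model = xgb.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

# TreeExplainer provides exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Sparsity proxy: average fraction of features with (near-)zero attribution.
eps = 1e-6  # hypothetical threshold for "no contribution"
sparsity = np.mean(np.abs(shap_values) < eps)

# Stability proxy: agreement on the most frequently selected top-k feature
# across samples predicted as the same (alert) class.
k = 5
alerts = X[model.predict(X) == 1]
top_k = np.argsort(-np.abs(explainer.shap_values(alerts)), axis=1)[:, :k]
_, counts = np.unique(top_k, return_counts=True)
stability = counts.max() / len(alerts)

print(f"sparsity={sparsity:.2f}, top-{k} stability={stability:.2f}")
```

The paper's own definitions of sparsity, stability, and efficiency may differ; this sketch only illustrates the general shape of such an evaluation.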