Industrial Cyber-Physical Systems (CPS) are sensitive infrastructure from both safety and economic perspectives, making their reliability critically important. Machine Learning (ML), specifically deep learning, is increasingly integrated into industrial CPS, but the inherent complexity of ML models results in non-transparent operation. Rigorous evaluation is needed to prevent models from exhibiting unexpected behaviour on future, unseen data. Explainable AI (XAI) can be used to uncover model reasoning, allowing a more extensive analysis of behaviour. We apply XAI to improve the predictive performance of ML models intended for industrial CPS. Using SHAP values, we analyse how components obtained from time-series decomposition affect model predictions. Through this method, we find evidence that models lack sufficient contextual information during training. By increasing the window size of data instances, informed by the XAI findings, we improve model performance.
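To make the attribution idea concrete, the following is a minimal sketch of Shapley-value attribution over decomposition components. All names and values here (the three components `trend`, `seasonal`, `residual`, the toy `model`, and the baseline of zeros) are illustrative assumptions, not the study's actual model or data; a real analysis would use a library such as `shap` on a trained deep model.

```python
from itertools import combinations
from math import factorial

# Hypothetical setup: a predictor scores a data instance from three
# time-series decomposition components. Purely illustrative.
FEATURES = ["trend", "seasonal", "residual"]

def model(x):
    # Stand-in predictor: a simple weighted sum of the components.
    return 2.0 * x["trend"] + 0.5 * x["seasonal"] + 0.1 * x["residual"]

def shapley_values(model, instance, baseline):
    """Exact Shapley attributions: for each feature, average its
    marginal contribution over all coalitions of the other features,
    with absent features set to their baseline values."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: (instance[g] if g in subset or g == f else baseline[g])
                          for g in FEATURES}
                without_f = {g: (instance[g] if g in subset else baseline[g])
                             for g in FEATURES}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

instance = {"trend": 1.2, "seasonal": -0.4, "residual": 0.05}
baseline = {"trend": 0.0, "seasonal": 0.0, "residual": 0.0}
phi = shapley_values(model, instance, baseline)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi.values()) - (model(instance) - model(baseline))) < 1e-9
```

A large, systematic attribution on the residual or seasonal component relative to the trend can hint that the model is reacting to short-term noise because its input window is too narrow to capture longer-range context, which is the kind of signal that motivated enlarging the window size.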