Artificial Intelligence (AI) systems have demonstrated strong performance on classification tasks. However, their lack of explainability remains a significant challenge, especially in high-stakes domains such as healthcare and finance, where understanding model decisions is paramount. We address this challenge through a comparative study of Quantum Boltzmann Machines (QBMs) and Classical Boltzmann Machines (CBMs), proposing an explainable AI framework that leverages principles of quantum computing within classical machine learning to provide substantive transparency into decision-making. Both models are trained on a binarised, dimensionally reduced MNIST dataset, with Principal Component Analysis (PCA) applied for preprocessing. For interpretability, we evaluate feature attributions using gradient-based saliency maps for QBMs and SHAP (SHapley Additive exPlanations) for CBMs. The QBMs deploy hybrid quantum-classical circuits with strongly entangling layers, allowing for richer latent representations, whereas the CBMs serve as a classical baseline trained with contrastive divergence. We find that QBMs outperform CBMs in classification accuracy (83.5% vs. 54%) and yield more concentrated feature-attribution distributions, as quantified by entropy (1.27 vs. 1.39). In other words, QBMs not only deliver better predictive performance than CBMs but also identify more clearly the "active ingredients", i.e. the most important features behind model predictions. Our results illustrate that quantum-classical hybrid models can improve both accuracy and interpretability, a step toward more trustworthy and explainable AI systems.
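The two quantitative steps named above, PCA-based dimensionality reduction with binarisation, and entropy as a measure of how concentrated a feature-attribution distribution is, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the component count, binarisation rule, and random stand-in data (in place of MNIST) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 784))  # stand-in for flattened 28x28 MNIST images

# PCA via SVD of the mean-centred data, keeping 8 components (assumed count).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:8].T

# Binarise each component against its median (one possible thresholding rule).
X_bin = (X_reduced > np.median(X_reduced, axis=0)).astype(int)

def attribution_entropy(attr):
    """Shannon entropy of a feature-attribution vector.

    Lower entropy means attributions are concentrated on fewer features,
    i.e. the model points at a clearer set of 'active ingredients'.
    """
    p = np.abs(attr) / np.abs(attr).sum()  # normalise to a distribution
    p = p[p > 0]                            # drop zeros (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

# A concentrated attribution vector yields lower entropy than a diffuse one.
concentrated = np.array([0.80, 0.10, 0.05, 0.05])
diffuse = np.array([0.25, 0.25, 0.25, 0.25])
print(attribution_entropy(concentrated) < attribution_entropy(diffuse))  # True
```

The uniform vector attains the maximum entropy (ln 4 here), so any skew toward a few dominant features pushes the value down, which is the sense in which the reported 1.27 vs. 1.39 comparison favours QBMs.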