A common trait of many machine learning models is that it is often difficult to understand and explain what caused the model to produce a given output. While the explainability of neural networks has been an active field of research in recent years, comparably little is known for quantum machine learning models. Despite a few recent works analyzing specific aspects of explainability, there is as of now no clear big-picture perspective on what can be expected from quantum learning models in terms of explainability. In this work, we address this issue by identifying promising research avenues in this direction and outlining the expected future results. We additionally propose two explanation methods designed specifically for quantum machine learning models, to the best of our knowledge the first of their kind. Alongside our preview of the field, we compare both existing and novel methods for explaining the predictions of quantum learning models. By studying explainability in quantum machine learning, we can contribute to the sustainable development of the field and prevent trust issues in the future.