Providing clear explanations for the decisions of machine learning models is essential if these models are to be deployed in critical applications. Counterfactual and semi-factual explanations have emerged as two mechanisms for giving users insight into the outputs of their models. We provide an overview of the computational complexity results in the literature for generating these explanations, finding that in many cases generation is computationally hard. We strengthen this argument considerably by contributing our own inapproximability results, showing that not only are explanations often hard to generate but, under certain assumptions, they are also hard to approximate. We discuss the implications of these complexity results for the XAI community and for policymakers seeking to regulate explanations in AI.