Deep neural network (DNN) models have achieved phenomenal success in many domains, ranging from academic research in science and engineering to industry and business. The modeling power of DNNs is believed to come from the complexity and over-parameterization of the model, which, on the other hand, has been criticized for its lack of interpretability. Although certainly not true for every application, in some settings, especially in economics, social science, the healthcare industry, and administrative decision making, scientists or practitioners are reluctant to use predictions made by a black-box system, for multiple reasons. One reason is that a major purpose of a study may be to make discoveries based on the prediction function, e.g., to reveal the relationships between measurements. Another reason may be that the training dataset is not large enough for researchers to feel fully confident in a purely data-driven result. Being able to examine and interpret the prediction function enables researchers to connect the result with existing knowledge or to gain insight into new directions to explore. Although classic statistical models are much more explainable, their accuracy often falls considerably below that of DNNs. In this paper, we propose an approach to fill the gap between relatively simple explainable models and DNNs, so that we can more flexibly tune the trade-off between interpretability and accuracy. Our main idea is a mixture of discriminative models that is trained with guidance from a DNN. Although mixtures of discriminative models have been studied before, our way of generating the mixture is quite different.
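To make the general idea concrete, the following is a minimal sketch, not the paper's actual algorithm, of what a DNN-guided mixture of simple discriminative models could look like in one dimension. The `teacher` function stands in for a trained DNN's prediction function; the partition rule, the split point, and the per-region linear fits are all illustrative assumptions rather than the authors' method.

```python
# Illustrative sketch (NOT the paper's algorithm): approximate a black-box
# "teacher" with a mixture of simple, interpretable models, one per region.

def teacher(x):
    # Stand-in for a trained DNN's prediction function.
    return abs(x - 0.5)

def fit_line(pairs):
    # Ordinary least squares for y = a*x + b on 1-D data.
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# 1. Query the teacher on the training inputs (the "guidance" step).
xs = [i / 100 for i in range(101)]
data = [(x, teacher(x)) for x in xs]

# 2. Partition the inputs using the teacher's behavior. Here we use a
#    crude fixed split at x = 0.5 where the teacher's slope changes;
#    a real method would learn the partition.
left = [(x, y) for x, y in data if x < 0.5]
right = [(x, y) for x, y in data if x >= 0.5]

# 3. Fit one interpretable model (a line) per region.
models = [fit_line(left), fit_line(right)]

def mixture_predict(x):
    a, b = models[0] if x < 0.5 else models[1]
    return a * x + b
```

Each component of the mixture is a plain linear model whose coefficients can be read off and interpreted, while the partition, derived from the black-box teacher, lets the mixture track nonlinear structure that a single linear model would miss.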