Neurosymbolic (NeSy) artificial intelligence combines logic- or rule-based techniques with neural networks. Compared to purely neural approaches, NeSy methods often offer enhanced interpretability, which is particularly promising for biomedical applications such as drug discovery. However, because interpretability is broadly defined, there are no clear guidelines for assessing the biological plausibility of model interpretations. To assess interpretability in the context of drug discovery, we devise a novel prediction task, drug mechanism-of-action (MoA) deconvolution, with an associated, tailored knowledge graph (KG), MoA-net. We then develop the MoA Retrieval System (MARS), a NeSy approach for drug discovery that leverages logical rules with learned rule weights. Using this interpretable feature alongside domain knowledge, we find that MARS and other NeSy approaches on KGs are susceptible to reasoning shortcuts, in which the prediction of true labels is driven by "degree bias" rather than by the domain-based rules. We then demonstrate ways to identify and mitigate this bias. After mitigation, MARS achieves performance on par with current state-of-the-art models while producing model interpretations aligned with known MoAs.