Multiple instance learning (MIL) is an effective and widely used approach to weakly supervised machine learning. In histopathology, MIL models have achieved remarkable success in tasks such as tumor detection, biomarker prediction, and outcome prognostication. However, MIL explanation methods still lag behind: they are limited to small bag sizes or disregard instance interactions. We revisit MIL through the lens of explainable AI (XAI) and introduce xMIL, a refined framework with more general assumptions. We demonstrate how to obtain improved MIL explanations using layer-wise relevance propagation (LRP) and conduct extensive evaluation experiments on three toy settings and four real-world histopathology datasets. Our approach consistently outperforms previous explanation attempts, with particularly improved faithfulness scores on challenging biomarker prediction tasks. Finally, we showcase how xMIL explanations enable pathologists to extract insights from MIL models, representing a significant advance for knowledge discovery and model debugging in digital histopathology. Code is available at: https://github.com/tubml-pathology/xMIL.