Synthetic Lethal (SL) relationships, though rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advances in AI model accuracy, domain experts still need interpretive paths and mechanism explorations that align more closely with domain-specific knowledge, particularly because wet-lab experiments are costly. To address this gap, we propose an iterative Human-AI collaborative framework with two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies at an appropriate granularity; and 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which helps experts organize and compare predictions and interpretive paths across granularities, uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, strengthening expert involvement and intervention to build trust. Facilitated by SLInterpreter, this framework ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. We evaluate the framework's efficacy through a case study and expert interviews.