Time series anomaly detection is a critical machine learning task for numerous applications, such as finance, healthcare, and industrial systems. However, even high-performing models may exhibit issues such as biases, leading to unreliable outcomes and misplaced confidence. While model explanation techniques, particularly visual explanations, offer valuable insights for detecting such issues by elucidating the attributions behind model decisions, significant limitations remain: they are primarily instance-based and do not scale across datasets, and they convey information in only one direction, from the model to the human, lacking a mechanism for users to act on the issues they detect. To fill these gaps, we introduce HILAD, a novel framework designed to foster dynamic, bidirectional collaboration between humans and AI for improving time series anomaly detection models. Through its visual interface, HILAD empowers domain experts to detect, interpret, and correct unexpected model behaviors at scale. Our evaluation on two time series datasets, together with user studies, demonstrates the effectiveness of HILAD in fostering deeper human understanding, enabling immediate corrective actions, and enhancing model reliability.
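To make the underlying task concrete, the following is a minimal sketch of point-wise time series anomaly detection using a simple z-score rule. The function name `zscore_anomalies`, the threshold value, and the synthetic signal are illustrative assumptions for exposition only; they are not part of HILAD, which targets the behavior of learned detection models rather than any specific detector.

```python
import numpy as np

def zscore_anomalies(series, threshold=3.0):
    """Flag indices whose absolute z-score exceeds the threshold.

    Illustrative detector only: real models (and the ones HILAD is
    meant to audit) are typically learned, not rule-based.
    """
    series = np.asarray(series, dtype=float)
    z = np.abs(series - series.mean()) / series.std()
    return np.where(z > threshold)[0]

# Synthetic example: 100 points of Gaussian noise followed by one spike.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.0, 1.0, 100), [15.0]])
print(zscore_anomalies(signal))  # the spike at index 100 is flagged
```

A detector like this already exhibits the kind of issue the abstract describes: its judgments depend on global statistics of the window it sees, so its errors must be inspected instance by instance, which is exactly the scalability gap visual explanation tools leave open.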