Developing decision-support systems that complement human performance in classification tasks remains an open challenge. A popular approach, Learning to Defer (LtD), allows a Machine Learning (ML) model to pass difficult cases to a human expert. However, LtD treats humans and ML models as mutually exclusive decision-makers, restricting the expert's contribution to mere predictions. To address this limitation, we propose Learning to Ask (LtA), a new framework that handles both when and how to incorporate expert input into an ML model. LtA is based on a two-part architecture: a standard ML model and an enriched model trained with additional human expert feedback, together with a formally optimal strategy for selecting when to query the enriched model. We provide two practical implementations of LtA: a sequential approach, which trains the models in stages, and a joint approach, which optimises them simultaneously. For the latter, we design surrogate losses with realisable-consistency guarantees. Our experiments with synthetic and real expert data demonstrate that LtA provides a more flexible and powerful foundation for effective human-AI collaboration.
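The two-part architecture above can be sketched at inference time as follows. This is a minimal illustrative stub, not the paper's implementation: the function names (`standard_model`, `enriched_model`, `gate`, `lta_predict`) and the confidence-based gating rule are hypothetical placeholders for the standard model, the expert-enriched model, and the learned query-selection strategy.

```python
import numpy as np

def standard_model(x):
    # f(x): predict from features alone (stub: sign of a linear score)
    return int(x.sum() > 0)

def enriched_model(x, expert_feedback):
    # g(x, e): predict from features plus expert input (stub)
    return int(x.sum() + expert_feedback > 0)

def gate(x, threshold=0.5):
    # r(x): decide whether the costly expert query is worthwhile.
    # Stub rule: ask only when the standard model's score is near the
    # decision boundary (low confidence). The paper learns this strategy.
    return abs(x.sum()) < threshold

def lta_predict(x, query_expert):
    """Learning-to-Ask inference: request expert feedback only when the
    gate selects the enriched model; otherwise use the standard model."""
    if gate(x):
        e = query_expert(x)          # expert is queried only here
        return enriched_model(x, e)
    return standard_model(x)
```

The key contrast with Learning to Defer is visible in `lta_predict`: the expert's feedback `e` is fed *into* a model rather than replacing the model's prediction outright.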