Language models (LMs) have achieved impressive accuracy across a variety of tasks but remain vulnerable to high-confidence misclassifications, also referred to as unknown unknowns (UUs). These UUs cluster into blind spots in the feature space, posing significant risks in high-stakes applications. This is particularly relevant for smaller, lightweight LMs, which are more susceptible to such errors. While the identification of UUs has been extensively studied, their mitigation remains an open challenge, including how to use identified UUs to eliminate unseen blind spots. In this work, we propose a novel approach to blind spot mitigation that employs intelligent agents -- either humans or large LMs -- as teachers to characterize UU-type errors. By leveraging the generalization capabilities of intelligent agents, we identify patterns in high-confidence misclassifications and use them to generate targeted synthetic samples that improve model robustness and reduce blind spots. We conduct an extensive evaluation of our method on three classification tasks and demonstrate its effectiveness in reducing the number of UUs while maintaining overall accuracy. We find that the effectiveness of human computation has a high ceiling but depends strongly on familiarity with the underlying task. Moreover, the cost gap between humans and LMs exceeds an order of magnitude, as LMs attain human-like generalization and generation performance while being more scalable.