The predominant de facto paradigm of testing ML models relies either on using held-out data to compute aggregate evaluation metrics or on assessing performance across different subgroups. However, such data-only testing methods operate under the restrictive assumption that the available empirical data is the sole input for testing ML models, disregarding valuable contextual information that could guide model testing. In this paper, we challenge the go-to approach of data-only testing and introduce context-aware testing (CAT), which uses context as an inductive bias to guide the search for meaningful model failures. We instantiate the first CAT system, SMART Testing, which employs large language models to hypothesize relevant and likely failures, which are evaluated on data using a self-falsification mechanism. Through empirical evaluations in diverse settings, we show that SMART automatically identifies more relevant and impactful failures than alternatives, demonstrating the potential of CAT as a testing paradigm.
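As a rough, hypothetical illustration of the CAT idea (not the paper's implementation), the sketch below shows a minimal testing loop: an LLM-backed step proposes failure hypotheses from task context, each hypothesis is checked against held-out data, and hypotheses whose predicted performance gap is not borne out are discarded, in the spirit of self-falsification. All names (`generate_hypotheses`, `Hypothesis`, `smart_test`, the threshold value) are illustrative assumptions.

```python
# Hypothetical sketch of a context-aware testing (CAT) loop.
# Names and structure are illustrative, not the authors' API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Hypothesis:
    description: str                      # natural-language failure hypothesis
    subgroup: Callable[[dict], bool]      # predicate selecting the rows it concerns


def generate_hypotheses(context: str) -> list[Hypothesis]:
    """Stand-in for an LLM call that turns task context into candidate failures."""
    return [
        Hypothesis("Model underperforms for patients over 80",
                   lambda row: row["age"] > 80),
    ]


def evaluate(model, rows: list[dict], metric) -> float:
    """Score the model on a set of rows with the given metric."""
    preds = [model(r) for r in rows]
    return metric([r["label"] for r in rows], preds)


def smart_test(model, rows, context, metric, gap_threshold=0.05):
    """Keep only hypotheses that the held-out data fails to falsify."""
    overall = evaluate(model, rows, metric)
    confirmed = []
    for h in generate_hypotheses(context):
        subgroup = [r for r in rows if h.subgroup(r)]
        if not subgroup:
            continue                      # hypothesis is not testable on this data
        gap = overall - evaluate(model, subgroup, metric)
        if gap > gap_threshold:           # self-falsification: drop if no real gap
            confirmed.append((h.description, gap))
    return confirmed
```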