Many training data attribution (TDA) methods aim to estimate how a model's behavior would change if one or more data points were removed from the training set. Methods based on implicit differentiation, such as influence functions, can be made computationally efficient, but fail to account for underspecification, the implicit bias of the optimization algorithm, or multi-stage training pipelines. By contrast, methods based on unrolling address these issues but face scalability challenges. In this work, we connect the implicit-differentiation-based and unrolling-based approaches and combine their benefits by introducing Source, an approximate unrolling-based TDA method that is computed using an influence-function-like formula. While remaining computationally efficient relative to unrolling-based approaches, Source is suitable for cases where implicit-differentiation-based approaches struggle, such as non-converged models and multi-stage training pipelines. Empirically, Source outperforms existing TDA techniques in counterfactual prediction, especially in settings where implicit-differentiation-based approaches fall short.
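To make the baseline concrete, the sketch below shows the classical influence-function TDA score that implicit-differentiation methods compute: the estimated effect of a training point on a query loss, -∇L(z_query)ᵀ H⁻¹ ∇L(z_m), using a toy logistic-regression model with an explicit damped Hessian. This is an illustration of the standard formula (Koh & Liang, 2017), not the paper's Source implementation; all names, the damping value, and the toy setup are assumptions for demonstration.

```python
# Minimal sketch of the classical influence-function TDA score that
# Source generalizes; illustrative only, not the paper's code.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_loss(theta, x, y):
    # Gradient of the logistic loss for one example (y in {0, 1}).
    return (sigmoid(x @ theta) - y) * x

def hessian(theta, X, damping=1e-3):
    # Hessian of the mean logistic loss; damping keeps it invertible.
    p = sigmoid(X @ theta)
    H = (X.T * (p * (1.0 - p))) @ X / len(X)
    return H + damping * np.eye(len(theta))

def influence_scores(theta, X, Y, x_q, y_q):
    # Classical score: -grad L(z_q)^T H^{-1} grad L(z_m) for each z_m,
    # approximating the query-loss change from upweighting z_m.
    H_inv_g = np.linalg.solve(hessian(theta, X), grad_loss(theta, x_q, y_q))
    return np.array([-grad_loss(theta, x, y) @ H_inv_g for x, y in zip(X, Y)])

# Toy data and a crudely trained parameter vector for demonstration.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = (X[:, 0] > 0).astype(float)
theta = np.zeros(3)
for _ in range(200):  # plain gradient descent on the mean loss
    theta -= 0.1 * np.mean([grad_loss(theta, x, y) for x, y in zip(X, Y)], axis=0)

print(influence_scores(theta, X, Y, X[0], Y[0])[:5])
```

Note that this formula presumes a converged optimum where the Hessian is meaningful; the abstract's point is that Source replaces this single implicit-differentiation step with an approximation to the unrolled training trajectory, so it remains applicable when that assumption fails.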